                     Event: ACM, 2020. 363–375. [doi: 10.1145/3372297.3417880]
                  [4]  Shokri R, Stronati M, Song CZ, Shmatikov V. Membership inference attacks against machine learning models. In: Proc. of the 2017 IEEE
                     Symp. on Security and Privacy. San Jose: IEEE, 2017. 3–18. [doi: 10.1109/SP.2017.41]
                  [5]  Newman AL. What the “right to be forgotten” means for privacy in a digital age. Science, 2015, 347(6221): 507–508. [doi: 10.1126/
                     science.aaa4603]
                  [6]  General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China, Standardization Administration. GB/T 35273-2020 Information security technology—Personal information security specification. Beijing: Standards Press of China, 2018 (in Chinese). https://std.samr.gov.cn/gb/search/gbDetailed?id=A0280129495AEBB4E05397BE0A0AB6FE
                  [7]  European Commission. Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). 2012. https://gdpr.eu/article-17-right-to-be-forgotten/
                  [8]  Kwak C, Lee J, Lee H. Forming a dimension of digital human rights: Research agenda for the right to be forgotten. In: Proc. of the 50th
                     Hawaii Int’l Conf. on System Sciences. Hilton Waikoloa Village: IEEE, 2017. 982–989.
                  [9]  Cao YZ, Yang JF. Towards making systems forget with machine unlearning. In: Proc. of the 2015 IEEE Symp. on Security and Privacy.
                     San Jose: IEEE, 2015. 463–480. [doi: 10.1109/SP.2015.35]
                 [10]  Sharir O, Peleg B, Shoham Y. The cost of training NLP models: A concise overview. arXiv:2004.08900, 2020.
                 [11]  Guo C, Goldstein T, Hannun A, van der Maaten L. Certified data removal from machine learning models. In: Proc. of the 37th Int’l Conf.
                     on Machine Learning. 2020. 3832–3842.
                 [12]  Liu Y, Fan MY, Chen C, Liu XM, Ma Z, Wang L, Ma JF. Backdoor defense with machine unlearning. In: Proc. of the 2022 IEEE Conf.
                     on Computer Communications (INFOCOM 2022). London: IEEE, 2022. 280–289. [doi: 10.1109/INFOCOM48880.2022.9796974]
                 [13]  Cao YZ, Yu AF, Aday A, Stahl E, Merwine J, Yang JF. Efficient repair of polluted machine learning systems via causal unlearning. In: Proc. of the 2018 Asia Conf. on Computer and Communications Security. Incheon: ACM, 2018. 735–747. [doi: 10.1145/3196494.3196517]
                 [14]  Wang BL, Yao YS, Shan S, Li HY, Viswanath B, Zheng HT, Zhao BY. Neural cleanse: Identifying and mitigating backdoor attacks in
                     neural networks. In: Proc. of the 2019 IEEE Symp. on Security and Privacy. San Francisco: IEEE, 2019. 707–723. [doi: 10.1109/SP.2019.
                     00031]
                 [15]  Nguyen TT, Huynh TT, Nguyen PL, Liew AWC, Yin HZ, Nguyen QVH. A survey of machine unlearning. arXiv:2209.02299, 2022.
                 [16]  Xu J, Wu ZH, Wang C, Jia XH. Machine unlearning: Solutions and challenges. IEEE Trans. on Emerging Topics in Computational Intelligence, 2024, 8(3): 2150–2168. [doi: 10.1109/TETCI.2024.3379240]
                 [17]  Zhang HB, Nakamura T, Isohara T, Sakurai K. A review on machine unlearning. SN Computer Science, 2023, 4(4): 337. [doi: 10.1007/
                     s42979-023-01767-4]
                 [18]  Xu H, Zhu TQ, Zhang LF, Zhou WL, Yu PS. Machine unlearning: A survey. ACM Computing Surveys, 2023, 56(1): 9. [doi: 10.1145/
                     3603620]
                 [19]  Liu X, Tsaftaris SA. Have you forgotten? A method to assess if machine learning models have forgotten data. In: Proc. of the 23rd Int’l
                     Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI 2020). Lima: Springer, 2020. 95–105. [doi: 10.1007/
                     978-3-030-59710-8_10]
                 [20]  Golatkar A, Achille A, Soatto S. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In: Proc. of the 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 9301–9309. [doi: 10.1109/CVPR42600.2020.00932]
                 [21]  Thudi A, Deza G, Chandrasekaran V, Papernot N. Unrolling SGD: Understanding factors influencing machine unlearning. In: Proc. of the
                     7th IEEE European Symp. on Security and Privacy (EuroS&P). Genoa: IEEE, 2022. 303–319. [doi: 10.1109/EuroSP53844.2022.00027]
                 [22]  Chai CL, Wang JY, Luo YY, Niu ZP, Li GL. Data management for machine learning: A survey. IEEE Trans. on Knowledge and Data Engineering, 2023, 35(5): 4646–4667. [doi: 10.1109/TKDE.2022.3148237]
                 [23]  Liu B, Liu Q, Stone P. Continual learning and private unlearning. arXiv:2203.12817, 2022.
                 [24]  Brophy J, Lowd D. Machine unlearning for random forests. In: Proc. of the 38th Int’l Conf. on Machine Learning. 2021. 1092–1104.
                 [25]  Schelter S, Grafberger S, Dunning T. HedgeCut: Maintaining randomised trees for low-latency machine unlearning. In: Proc. of the 2021
                     Int’l Conf. on Management of Data. Virtual Event: ACM, 2021. 1545–1557. [doi: 10.1145/3448016.3457239]
                 [26]  Bourtoule L, Chandrasekaran V, Choquette-Choo CA, Jia HR, Travers A, Zhang BW, Lie D, Papernot N. Machine unlearning. In: Proc.
                     of the 2021 IEEE Symp. on Security and Privacy. San Francisco: IEEE, 2021. 141–159. [doi: 10.1109/SP40001.2021.00019]
                 [27]  Felps DL, Schwickerath AD, Williams JD, Vuong TN, Briggs A, Hunt M, Sakmar E, Saranchak DD, Shumaker T. Class clown: Data
                     redaction in machine unlearning at enterprise scale. arXiv:2012.04699, 2020.