Page 257 - 《软件学报》 (Journal of Software), 2025, Issue 4
Li ZT, et al.: Survey on machine unlearning 1663
[56] Golatkar A, Achille A, Ravichandran A, Polito M, Soatto S. Mixed-privacy forgetting in deep networks. In: Proc. of the 2021 IEEE/CVF
Conf. on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021. 792–801. [doi: 10.1109/CVPR46437.2021.00085]
[57] Peste A, Alistarh D, Lampert CH. SSSE: Efficiently erasing samples from trained machine learning models. arXiv:2107.03860, 2021.
[58] Golatkar A, Achille A, Soatto S. Forgetting outside the box: Scrubbing deep networks of information accessible from input-output
observations. In: Proc. of the 16th European Conf. on Computer Vision (ECCV 2020). Glasgow: Springer, 2020. 383–398. [doi: 10.1007/
978-3-030-58526-6_23]
[59] Dwork C, Lei J. Differential privacy and robust statistics. In: Proc. of the 41st Annual ACM Int’l Symp. on Theory of Computing.
Bethesda: ACM, 2009. 371–380. [doi: 10.1145/1536414.1536466]
[60] Liu JX, Xue MS, Lou J, Zhang XY, Xiong L, Qin Z. MUter: Machine unlearning on adversarially trained models. In: Proc. of the 2023
IEEE/CVF Int’l Conf. on Computer Vision. Paris: IEEE, 2023. 4869–4879. [doi: 10.1109/ICCV51070.2023.00451]
[61] Du M, Chen Z, Liu C, Oak R, Song D. Lifelong anomaly detection through unlearning. In: Proc. of the 2019 ACM SIGSAC Conf. on
Computer and Communications Security. London: ACM, 2019. 1283–1297. [doi: 10.1145/3319535.3363226]
[62] Liu Y, Ma Z, Liu XM, Liu J, Jiang ZY, Ma JF, Yu P, Ren K. Learn to forget: Machine unlearning via neuron masking. arXiv:2003.10933,
2021.
[63] Ganhör C, Penz D, Rekabsaz N, Lesota O, Schedl M. Unlearning protected user attributes in recommendations with adversarial training.
In: Proc. of the 45th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. Madrid: ACM, 2022. 2142–2147.
[doi: 10.1145/3477495.3531820]
[64] Wu G, Hashemi M, Srinivasa C. PUMA: Performance unchanged model augmentation for training data removal. In: Proc. of the 36th
AAAI Conf. on Artificial Intelligence. Virtual Event: AAAI, 2022. 8675–8682. [doi: 10.1609/aaai.v36i8.20846]
[65] Ye DY, Zhu TQ, Zhu CC, Wang DR, Shi ZW, Shen S, Zhou WL, Xue MH. Reinforcement unlearning. arXiv:2312.15910, 2024.
[66] Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. In: Proc.
of the 27th Int’l Conf. on Neural Information Processing Systems. Montreal: MIT Press, 2014. 2672–2680.
[67] Ullah E, Mai T, Rao A, Rossi RA, Arora R. Machine unlearning via algorithmic stability. In: Proc. of the 34th Conf. on Learning Theory.
2021. 4126–4142.
[68] Nguyen QP, Low BKH, Jaillet P. Variational Bayesian unlearning. In: Proc. of the 34th Int’l Conf. on Neural Information Processing
Systems. Vancouver: Curran Associates Inc., 2020. 16025–16036.
[69] Chen KY, Huang Y, Wang YW. Machine unlearning via GAN. arXiv:2111.11869, 2021.
[70] Chundawat VS, Tarun AK, Mandal M, Kankanhalli M. Can bad teaching induce forgetting? Unlearning in deep networks using an
incompetent teacher. arXiv:2205.08096, 2023.
[71] Zhang PF, Bai GD, Huang Z, Xu XS. Machine unlearning for image retrieval: A generative scrubbing approach. In: Proc. of the 30th
ACM Int’l Conf. on Multimedia. Lisboa: ACM, 2022. 237–245. [doi: 10.1145/3503161.3548378]
[72] Thudi A, Jia HR, Shumailov I, Papernot N. On the necessity of auditable algorithmic definitions for machine unlearning. In: Proc. of the
31st USENIX Security Symp. (USENIX Security 22). Boston: USENIX Association, 2022. 4007–4022.
[73] Huang Y, Li XX, Li K. EMA: Auditing data removal from trained models. In: Proc. of the 24th Int’l Conf. on Medical Image Computing
and Computer Assisted Intervention (MICCAI 2021). Strasbourg: Springer, 2021. 793–803. [doi: 10.1007/978-3-030-87240-3_76]
[74] Goel S, Prabhu A, Kumaraguru P. Evaluating inexact unlearning requires revisiting forgetting. arXiv:2201.06640, 2023.
[75] Warnecke A, Pirch L, Wressnegger C, Rieck K. Machine unlearning of features and labels. arXiv:2108.11577, 2023.
[76] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc. of the IEEE, 1998, 86(11):
2278–2324. [doi: 10.1109/5.726791]
[77] Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images. Technical Report, University of Toronto, 2009. https://www.cs.toronto.edu/~kriz/cifar.html
[78] Sakar CO, Polat SO, Katircioglu M, Kastro Y. Real-time prediction of online shoppers’ purchasing intention using multilayer perceptron
and LSTM recurrent neural networks. Neural Computing & Applications, 2019, 31(10): 6893–6908. [doi: 10.1007/s00521-018-3523-0]
[79] Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. In: Proc. of the 2009 IEEE
Conf. on Computer Vision and Pattern Recognition. Miami: IEEE, 2009. 248–255. [doi: 10.1109/CVPR.2009.5206848]
[80] Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms.
arXiv:1708.07747, 2017.
[81] Goodfellow IJ, Bulatov Y, Ibarz J, Arnoud S, Shet V. Multi-digit number recognition from street view imagery using deep convolutional
neural networks. arXiv:1312.6082, 2014.
[82] Sekhari A, Acharya J, Kamath G, Suresh AT. Remember what you want to forget: Algorithms for machine unlearning. In: Proc. of the