1662 Journal of Software (软件学报), 2025, Vol. 36, No. 4
[28] Chen YT, Xiong J, Xu WH, Zuo JW. A novel online incremental and decremental learning algorithm based on variable support vector
machine. Cluster Computing, 2019, 22(S3): 7435–7445. [doi: 10.1007/s10586-018-1772-4]
[29] Khan ME, Swaroop S. Knowledge-adaptation priors. In: Proc. of the 35th Int’l Conf. on Neural Information Processing Systems. Curran
Associates Inc., 2021. 19757–19770.
[30] Baumhauer T, Schöttle P, Zeppelzauer M. Machine unlearning: Linear filtration for logit-based classifiers. Machine Learning, 2022,
111(9): 3203–3226. [doi: 10.1007/s10994-022-06178-9]
[31] Kim J, Woo SS. Efficient two-stage model retraining for machine unlearning. In: Proc. of the 2022 IEEE/CVF Conf. on Computer Vision
and Pattern Recognition Workshops. New Orleans: IEEE, 2022. 4360–4368. [doi: 10.1109/CVPRW56347.2022.00482]
[32] Neel S, Roth A, Sharifi-Malvajerdi S. Descent-to-delete: Gradient-based methods for machine unlearning. In: Proc. of the 32nd Int’l
Conf. on Algorithmic Learning Theory. 2021. 931–962.
[33] Nguyen QP, Oikawa R, Divakaran DM, Chan MC, Low BKH. Markov chain Monte Carlo-based machine unlearning: Unlearning what
needs to be forgotten. In: Proc. of the 2022 ACM Asia Conf. on Computer and Communications Security. Nagasaki: ACM, 2022.
351–363. [doi: 10.1145/3488932.3517406]
[34] Zeng YY, Wang TH, Chen S, Just HA, Jin R, Jia RX. Learning to refit for convex learning problems. arXiv:2111.12545, 2022.
[35] Gao J, Garg S, Mahmoody M, Vasudevan PN. Deletion inference, reconstruction, and compliance in machine (un)learning. Proc. on
Privacy Enhancing Technologies, 2022(3): 415–436. [doi: 10.56553/popets-2022-0079]
[36] Marchant NG, Rubinstein BIP, Alfeld S. Hard to forget: Poisoning attacks on certified machine unlearning. In: Proc. of the 36th AAAI
Conf. on Artificial Intelligence. Virtual Event: AAAI, 2022. 7691–7700. [doi: 10.1609/aaai.v36i7.20736]
[37] Hu HS, Wang S, Chang JM, Zhong HN, Sun RX, Hao S, Zhu HJ, Xue MH. A duty to forget, a right to be assured? Exposing
vulnerabilities in machine unlearning services. arXiv:2309.08230, 2024.
[38] Yoon Y, Nam J, Yun H, Kim D, Ok J. Few-shot unlearning by model inversion. arXiv:2205.15567, 2023.
[39] Wu YJ, Dobriban E, Davidson S. DeltaGrad: Rapid retraining of machine learning models. In: Proc. of the 37th Int’l Conf. on Machine
Learning. 2020. 10355–10366.
[40] Ginart AA, Guan MY, Valiant G, Zou J. Making AI forget you: Data deletion in machine learning. In: Proc. of the 33rd Int’l Conf. on
Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2019. 3518–3531.
[41] Fu SP, He FX, Tao DC. Knowledge removal in sampling-based Bayesian inference. arXiv:2203.12964, 2022.
[42] Mehta R, Pal S, Singh V, Ravi SN. Deep unlearning via randomized conditionally independent Hessians. In: Proc. of the 2022 IEEE/CVF
Conf. on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022. 10412–10421. [doi: 10.1109/CVPR52688.2022.01017]
[43] Izzo Z, Smart MA, Chaudhuri K, Zou J. Approximate data deletion from machine learning models. In: Proc. of the 24th Int’l Conf. on
Artificial Intelligence and Statistics. 2021. 2008–2016.
[44] Huang HX, Ma XJ, Erfani SM, Bailey J, Wang YS. Unlearnable examples: Making personal data unexploitable. arXiv:2101.04898, 2021.
[45] Fu SP, He FX, Xu Y, Tao DC. Bayesian inference forgetting. arXiv:2101.06417, 2021.
[46] Kurmanji M, Triantafillou P, Hayes J, Triantafillou E. Towards unbounded machine unlearning. In: Proc. of the 37th Int’l Conf. on
Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2023. 1957–1987.
[47] Parne N, Puppaala K, Bhupathi N, Patgiri R. An investigation on learning, polluting, and unlearning the spam emails for lifelong learning.
arXiv:2111.14609, 2021.
[48] Ye JW, Fu YF, Song J, Yang XY, Liu SH, Jin X, Song ML, Wang XC. Learning with recoverable forgetting. In: Proc. of the 17th
European Conf. on Computer Vision. Tel Aviv: Springer, 2022. 87–103. [doi: 10.1007/978-3-031-20083-0_6]
[49] Chen KY, Wang YW, Huang Y. Lightweight machine unlearning in neural network. arXiv:2111.05528, 2021.
[50] Aldaghri N, Mahdavifar H, Beirami A. Coded machine unlearning. IEEE Access, 2021, 9: 88137–88150. [doi: 10.1109/ACCESS.2021.3090019]
[51] Yan HN, Li XG, Guo ZY, Li H, Li FH, Lin XD. ARCANE: An efficient architecture for exact machine unlearning. In: Proc. of the 31st
Int’l Joint Conf. on Artificial Intelligence (IJCAI 2022). Vienna: Morgan Kaufmann, 2022. 4006–4013. [doi: 10.24963/ijcai.2022/556]
[52] He YZ, Meng GZ, Chen K, He JW, Hu XB. DeepObliviate: A powerful charm for erasing data residual memory in deep neural networks.
arXiv:2105.06209, 2021.
[53] Gupta V, Jung C, Neel S, Roth A, Sharifi-Malvajerdi S, Waites C. Adaptive machine unlearning. In: Proc. of the 35th Int’l Conf. on
Neural Information Processing Systems. Curran Associates Inc., 2021. 16319–16330.
[54] Chundawat VS, Tarun AK, Mandal M, Kankanhalli M. Zero-shot machine unlearning. arXiv:2201.05629, 2023.
[55] Chen C, Sun F, Zhang M, Ding BL. Recommendation unlearning. In: Proc. of the 2022 ACM Web Conf. Virtual Event: ACM, 2022.
2768–2777. [doi: 10.1145/3485447.3511997]