Page 327 - 《软件学报》 (Journal of Software), 2026, Issue 1
[73] Singh A, Click K, Parizi RM, Zhang Q, Dehghantanha A, Choo KKR. Sidechain technologies in blockchain networks: An examination
and state-of-the-art review. Journal of Network and Computer Applications, 2020, 149: 102471. [doi: 10.1016/j.jnca.2019.102471]
[74] Dilley J, Poelstra A, Wilkins J, Piekarska M, Gorlick B, Friedenbach M. Strong federations: An interoperable blockchain solution to
centralized third-party risks. arXiv:1612.05491, 2016.
[75] Adnan M, Kalra S, Cresswell JC, Taylor GW, Tizhoosh HR. Federated learning and differential privacy for medical image analysis.
Scientific Reports, 2022, 12(1): 1953. [doi: 10.1038/s41598-022-05539-7]
[76] Kim H, Park J, Bennis M, Kim SL. Blockchained on-device federated learning. IEEE Communications Letters, 2020, 24(6): 1279–1283.
[doi: 10.1109/LCOMM.2019.2921755]
[77] Wang WQ, Tian ZY, Zhang CH, Yu S. Machine unlearning: A comprehensive survey. arXiv:2405.07406, 2024.
[78] Bourtoule L, Chandrasekaran V, Choquette-Choo CA, Jia HR, Travers A, Zhang BW, Lie D, Papernot N. Machine unlearning. In: Proc.
of the 2021 IEEE Symp. on Security and Privacy (SP). San Francisco: IEEE, 2021. 141–159. [doi: 10.1109/SP40001.2021.00019]
[79] Cao YZ, Yang JF. Towards making systems forget with machine unlearning. In: Proc. of the 2015 IEEE Symp. on Security and Privacy
(SP). San Jose: IEEE, 2015. 463–480. [doi: 10.1109/SP.2015.35]
[80] Guo C, Goldstein T, Hannun AY, van der Maaten L. Certified data removal from machine learning models. In: Proc. of the 37th Int’l Conf. on Machine Learning. PMLR, 2020. 3832–3842.
[81] Sekhari A, Acharya J, Kamath G, Suresh AT. Remember what you want to forget: Algorithms for machine unlearning. In: Proc. of the
35th Int’l Conf. on Neural Information Processing Systems. Curran Associates Inc., 2021. 1383.
[82] Wu C, Zhu SC, Mitra P. Federated unlearning with knowledge distillation. arXiv:2201.09441, 2022.
[83] Liu Y, Xu L, Yuan XL, Wang C, Li B. The right to be forgotten in federated learning: An efficient realization with rapid retraining. In:
Proc. of the 2022 IEEE Conf. on Computer Communications. London: IEEE, 2022. 1749–1758. [doi: 10.1109/INFOCOM48880.2022.9796721]
[84] Wang JX, Guo S, Xie X, Qi H. Federated unlearning via class-discriminative pruning. In: Proc. of the 2022 ACM Web Conf. ACM,
2022. 622–632. [doi: 10.1145/3485447.3512222]
[85] Chen M, Zhang ZK, Wang TH, Backes M, Humbert M, Zhang Y. When machine unlearning jeopardizes privacy. In: Proc. of the 2021 ACM SIGSAC Conf. on Computer and Communications Security. ACM, 2021. 896–911. [doi: 10.1145/3460120.3484756]
[86] Zanella-Béguelin S, Wutschitz L, Tople S, Rühle V, Paverd A, Ohrimenko O, Köpf B, Brockschmidt M. Analyzing information leakage
of updates to natural language models. In: Proc. of the 2020 ACM SIGSAC Conf. on Computer and Communications Security. ACM,
2020. 363–375. [doi: 10.1145/3372297.3417880]
[87] Sun ZK, Ruan N, Li JH. DDL: Effective and comprehensible interpretation framework for diverse Deepfake detectors. IEEE Trans. on
Information Forensics and Security, 2025, 20: 3601–3615. [doi: 10.1109/TIFS.2025.3553803]
[88] Mirsky Y, Lee W. The creation and detection of Deepfakes: A survey. ACM Computing Surveys, 2021, 54(1): 7. [doi: 10.1145/3425780]
[89] Hu EJ, Shen YL, Wallis P, Allen-Zhu Z, Li YZ, Wang S, Wang L, Chen WZ. LoRA: Low-rank adaptation of large language models. In:
Proc. of the 10th Int’l Conf. on Learning Representations. OpenReview.net, 2022.
[90] Ruiz N, Li YZ, Jampani V, Pritch Y, Rubinstein M, Aberman K. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In: Proc. of the 2023 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023. 22500–22510. [doi: 10.1109/CVPR52729.2023.02155]
[91] Shan S, Cryan J, Wenger E, Zheng HT, Hanocka R, Zhao BY. Glaze: Protecting artists from style mimicry by text-to-image models. In:
Proc. of the 32nd USENIX Security Symp. Anaheim: USENIX Association, 2023. 2187–2204.
[92] Liang CM, Wu XY, Hua Y, Zhang JR, Xue YM, Song T, Xue ZG, Ma RH, Guan HB. Adversarial example does good: Preventing painting imitation from diffusion models via adversarial examples. In: Proc. of the 40th Int’l Conf. on Machine Learning. Honolulu: PMLR, 2023. 20763–20786.
[93] Sun ZK, Liu ZJ, Ji SL, Lin CH, Ruan N. Pretender: Universal active defense against diffusion finetuning attacks. In: Proc. of the 34th
USENIX Security Symp. Seattle: USENIX Association, 2025.
[94] Hayet I, Yao ZJ, Luo B. Invernet: An inversion attack framework to infer fine-tuning datasets through word embeddings. In: Proc. of the Findings of the Association for Computational Linguistics: EMNLP 2022. Abu Dhabi: ACL, 2022. 5009–5018. [doi: 10.18653/v1/2022.findings-emnlp.368]
[95] Li HR, Xu MS, Song YQ. Sentence embedding leaks more information than you expect: Generative embedding inversion attack to recover the whole sentence. In: Proc. of the Findings of the Association for Computational Linguistics: ACL 2023. Toronto: ACL, 2023. 14022–14040. [doi: 10.18653/v1/2023.findings-acl.881]

