2268 Journal of Software (软件学报), 2025, Vol. 36, No. 5
[8] Cao XY, Fang MH, Liu J, Gong NZ. FLTrust: Byzantine-robust federated learning via trust bootstrapping. arXiv:2012.13995v1, 2020.
[9] Bagdasaryan E, Veit A, Hua YQ, Estrin D, Shmatikov V. How to backdoor federated learning. In: Proc. of the 23rd Int’l Conf. on Artificial Intelligence and Statistics. Palermo: AISTATS, 2020. 2938–2948.
[10] Wang Y, Li GL, Li KY. Survey on contribution evaluation for federated learning. Ruan Jian Xue Bao/Journal of Software, 2023, 34(3): 1168–1192 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6786.htm [doi: 10.13328/j.cnki.jos.006786]
[11] Zhao BW, Liu XM, Chen WN. When crowdsensing meets federated learning: Privacy-preserving mobile crowdsensing system. arXiv:2102.10109, 2021.
[12] Lv HT, Zheng ZZ, Luo T, Wu F, Tang SJ, Hua LF, Jia RF, Lv CF. Data-free evaluation of user contributions in federated learning. In: Proc. of the 19th Int’l Symp. on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). Philadelphia: IEEE, 2021. 1–8. [doi: 10.23919/WiOpt52861.2021.9589136]
[13] Shapley LS. A value for n-person games. In: Kuhn HW, Tucker AW, eds. Contributions to the Theory of Games. Princeton: Princeton University Press, 1953. 307–318. [doi: 10.1515/9781400881970-018]
[14] Wang TH, Rausch J, Zhang C, Jia RX, Song D. A principled approach to data valuation for federated learning. In: Yang Q, Fan LX, Yu H, eds. Federated Learning: Privacy and Incentive. Cham: Springer, 2020. 153–167. [doi: 10.1007/978-3-030-63076-8_11]
[15] Ghorbani A, Zou JY. Data Shapley: Equitable valuation of data for machine learning. In: Proc. of the 36th Int’l Conf. on Machine Learning. Long Beach: ICML, 2019. 2242–2251.
[16] Liu ZL, Chen YY, Yu H, Liu Y, Cui LZ. GTG-Shapley: Efficient and accurate participant contribution evaluation in federated learning. ACM Trans. on Intelligent Systems and Technology, 2022, 13(4): 60. [doi: 10.1145/3501811]
[17] Nasr M, Shokri R, Houmansadr A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In: Proc. of the 2019 IEEE Symp. on Security and Privacy (SP). San Francisco: IEEE, 2019. 739–753. [doi: 10.1109/SP.2019.00065]
[18] Hitaj B, Ateniese G, Perez-Cruz F. Deep models under the GAN: Information leakage from collaborative deep learning. In: Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security. Dallas: ACM, 2017. 603–618. [doi: 10.1145/3133956.3134012]
[19] Cao XY, Gong NZ. MPAF: Model poisoning attacks to federated learning based on fake clients. In: Proc. of the 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR) Workshops. New Orleans: IEEE, 2022. 3395–3403. [doi: 10.1109/CVPRW56347.2022.00383]
[20] Lyu LJ, Yu H, Ma XJ, Chen C, Sun LC, Zhao J, Yang Q, Yu PS. Privacy and robustness in federated learning: Attacks and defenses. IEEE Trans. on Neural Networks and Learning Systems, 2024, 35(7): 8726–8746. [doi: 10.1109/TNNLS.2022.3216981]
[21] Gu YH, Bai YB. Survey on security and privacy of federated learning models. Ruan Jian Xue Bao/Journal of Software, 2023, 34(6): 2833–2864 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6658.htm [doi: 10.13328/j.cnki.jos.006658]
[22] Tang LT, Chen ZN, Zhang LF, Wu D. Research progress of privacy issues in federated learning. Ruan Jian Xue Bao/Journal of Software, 2023, 34(1): 197–229 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6411.htm [doi: 10.13328/j.cnki.jos.006411]
[23] Liu YX, Chen H, Liu YH, Li CP. Privacy-preserving techniques in federated learning. Ruan Jian Xue Bao/Journal of Software, 2022, 33(3): 1057–1092 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6446.htm [doi: 10.13328/j.cnki.jos.006446]
[24] Tan ZW, Zhang LF. Survey on privacy preserving techniques for machine learning. Ruan Jian Xue Bao/Journal of Software, 2020, 31(7): 2127–2156 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6052.htm [doi: 10.13328/j.cnki.jos.006052]
[25] Wei LF, Chen CC, Zhang L, Li MS, Chen YJ, Wang Q. Security issues and privacy preserving in machine learning. Journal of Computer Research and Development, 2020, 57(10): 2066–2085 (in Chinese with English abstract). [doi: 10.7544/issn1000-1239.2020.20200426]
[26] Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747, 2017.
[27] Krizhevsky A. Learning multiple layers of features from tiny images [MS Thesis]. Toronto: University of Toronto, 2009.
[28] Fang MH, Cao XY, Jia JY, Gong NZ. Local model poisoning attacks to Byzantine-robust federated learning. In: Proc. of the 29th USENIX Security Symp. USENIX, 2020. 1605–1622.
[29] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: Proc. of the 3rd Int’l Conf. on Learning Representations. San Diego: ICLR, 2015.
[30] Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. In: Proc. of the 2009 IEEE Conf. on Computer Vision and Pattern Recognition. Miami: IEEE, 2009. 248–255. [doi: 10.1109/CVPR.2009.5206848]