Page 428 - Journal of Software (《软件学报》), 2025, Issue 10
Zhang JH (张锦弘), et al.: A data-free model stealing attack method based on visual feature decoupling. 4825
…an important research direction that urgently needs to be addressed.