     Knowledge and Data Engineering, 2024, 36(1): 335–355. [doi: 10.1109/TKDE.2023.3282907]
 [5] Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R. Intriguing properties of neural networks. arXiv:1312.6199, 2014.
 [6] Mao JG, Shi SS, Wang XG, Li HS. 3D object detection for autonomous driving: A comprehensive survey. Int’l Journal of Computer Vision, 2023, 131(8): 1909–1963. [doi: 10.1007/s11263-023-01790-1]
 [7] Nguyen K, Proença H, Alonso-Fernandez F. Deep learning for iris recognition: A survey. ACM Computing Surveys, 2024, 56(9): 223. [doi: 10.1145/3651306]
 [8] Sun SH, Goldgof GM, Butte A, Alaa AM. Aligning synthetic medical images with clinical knowledge using human feedback. In: Proc. of the 37th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2023. 13408–13428.
 [9] Gong ZT, Wang WL. Adversarial and clean data are not twins. In: Proc. of the 6th Int’l Workshop on Exploiting Artificial Intelligence Techniques for Data Management. Seattle: ACM, 2023. 6. [doi: 10.1145/3593078.3593935]
[10] Papernot N, McDaniel P, Wu X, Jha S, Swami A. Distillation as a defense to adversarial perturbations against deep neural networks. In: Proc. of the 2016 IEEE Symp. on Security and Privacy. San Jose: IEEE, 2016. 582–597. [doi: 10.1109/SP.2016.41]
[11] Liang B, Li HC, Su MQ, Li XR, Shi WC, Wang XF. Detecting adversarial image examples in deep neural networks with adaptive noise reduction. IEEE Trans. on Dependable and Secure Computing, 2021, 18(1): 72–85. [doi: 10.1109/TDSC.2018.2874243]
[12] Zhou T, Gan R, Xu DW, Wang JY, Xuan Q. Survey on adversarial example detection of images. Ruan Jian Xue Bao/Journal of Software, 2024, 35(1): 185–219 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/6834.htm [doi: 10.13328/j.cnki.jos.006834]
[13] Hendrycks D, Gimpel K. Early methods for detecting adversarial images. arXiv:1608.00530, 2017.
[14] Feinman R, Curtin RR, Shintre S, Gardner AB. Detecting adversarial samples from artifacts. arXiv:1703.00410, 2017.
[15] Lust J, Condurache AP. GraN: An efficient gradient-norm based detector for adversarial and misclassified examples. arXiv:2004.09179, 2020.
[16] Liu H, Zhao B, Guo JB, Zhang KH, Liu P. A lightweight unsupervised adversarial detector based on autoencoder and isolation forest. Pattern Recognition, 2024, 147: 110127. [doi: 10.1016/j.patcog.2023.110127]
[17] Wang YC, Li XG, Yang L, Ma JF, Li H. ADDITION: Detecting adversarial examples with image-dependent noise reduction. IEEE Trans. on Dependable and Secure Computing, 2024, 21(3): 1139–1154. [doi: 10.1109/TDSC.2023.3269012]
[18] Zheng ZH, Hong PY. Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks. In: Proc. of the 32nd Int’l Conf. on Neural Information Processing Systems. Montréal: Curran Associates Inc., 2018. 7924–7933.
[19] Eniser HF, Christakis M, Wüstholz V. RAID: Randomized adversarial-input detection for neural networks. arXiv:2002.02776, 2020.
[20] Ma SQ, Liu YQ, Tao GH, Lee WC, Zhang XY. NIC: Detecting adversarial samples with neural network invariant checking. In: Proc. of the 2019 Network and Distributed System Security Symp. San Diego, 2019. 1–15. [doi: 10.14722/ndss.2019.23415]
[21] Tian SX, Yang GL, Cai Y. Detecting adversarial examples through image transformation. In: Proc. of the 32nd AAAI Conf. on Artificial Intelligence. New Orleans: AAAI Press, 2018. 4139–4146. [doi: 10.1609/aaai.v32i1.11828]
[22] Ryu G, Choi D. Detection of adversarial attacks based on differences in image entropy. Int’l Journal of Information Security, 2024, 23(1): 299–314. [doi: 10.1007/s10207-023-00735-6]
[23] Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. arXiv:1412.6572, 2015.
[24] Kurakin A, Goodfellow I, Bengio S. Adversarial examples in the physical world. arXiv:1607.02533, 2017.
[25] Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A. Towards deep learning models resistant to adversarial attacks. arXiv:1706.06083, 2019.
[26] Moosavi-Dezfooli SM, Fawzi A, Frossard P. DeepFool: A simple and accurate method to fool deep neural networks. In: Proc. of the 2016 IEEE Conf. on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016. 2574–2582. [doi: 10.1109/CVPR.2016.282]
[27] Carlini N, Wagner D. Adversarial examples are not easily detected: Bypassing ten detection methods. In: Proc. of the 10th ACM Workshop on Artificial Intelligence and Security. Dallas: ACM, 2017. 3–14. [doi: 10.1145/3128572.3140444]
[28] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2015.
[29] Huang G, Liu Z, Van der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proc. of the 2017 IEEE Conf. on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017. 2261–2269. [doi: 10.1109/CVPR.2017.243]
[30] Liu Z, Mao HZ, Wu CY, Feichtenhofer C, Darrell T, Xie SN. A ConvNet for the 2020s. In: Proc. of the 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022. 11966–11976. [doi: 10.1109/CVPR52688.2022.01167]
[31] Xu WL, Evans D, Qi YJ. Feature squeezing: Detecting adversarial examples in deep neural networks. In: Proc. of the 2018 Network and Distributed System Security Symp. San Diego, 2018. 1–15. [doi: 10.14722/ndss.2018.23198]
[32] Cui JQ, Tian ZT, Zhong ZS, Qi XJ, Yu B, Zhang HW. Decoupled Kullback-Leibler divergence loss. arXiv:2305.13948, 2024.