
Journal of Software (软件学报), ISSN 1000-9825, CODEN RUXUEW                        E-mail: jos@iscas.ac.cn
Journal of Software, 2021,32(11):3512−3529 [doi: 10.13328/j.cnki.jos.006084]        http://www.jos.org.cn
© Institute of Software, Chinese Academy of Sciences. All rights reserved.           Tel: +86-10-62562563


A Black-box Adversarial Attack Algorithm Based on Evolution Strategy and Attention Mechanism∗

HUANG Li-Feng 1,2,  ZHUANG Wen-Zi 1,  LIAO Yong-Xian 1,  LIU Ning 1,2
1 (School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China)
2 (Guangdong Key Laboratory of Information Security Technology, Guangzhou 510006, China)
Corresponding author: LIU Ning, E-mail: liuning2@mail.sysu.edu.cn

Abstract: Deep neural networks have achieved excellent results on many computer vision tasks and are widely applied across different domains. However, studies have found that deep neural networks are fragile when facing adversarial example attacks, which seriously threatens the security of various systems. Among existing adversarial attacks, black-box attacks are closer to realistic attack scenarios because of constraints such as the model-agnostic setting and limited queries. However, existing black-box attack methods suffer from low attack efficiency and weak imperceptibility, so this study proposes a black-box adversarial attack method based on evolution strategy. The method fully considers the distribution of gradient-update directions during the attack and adaptively learns better search paths, improving attack efficiency. After a successful attack, it incorporates an attention mechanism: guided by the class activation map, the perturbation vector is grouped and compressed, which reduces the redundant perturbations accumulated during the black-box attack and enhances the imperceptibility of the optimized adversarial examples. The effectiveness and robustness of the method are validated by comparisons with four state-of-the-art black-box attack methods (AutoZOOM, QL-attack, FD-attack, D-based attack) on seven deep neural networks.
Key words: adversarial example; black-box attack; evolution strategy; attention mechanism; compression optimization
CLC number: TP18
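To make the evolution-strategy step described in the abstract concrete, the sketch below shows one query-only iteration in NumPy. It is an illustrative assumption, not the authors' released implementation: `es_attack_step`, `loss_fn` (a hypothetical score oracle returning an attack loss to maximize, e.g. the negative confidence of the true class), and every hyperparameter are placeholder names chosen here.

```python
import numpy as np

def es_attack_step(x_adv, loss_fn, mu, sigma=0.01, pop=20, lr=0.005):
    """One black-box iteration: estimate an update direction from score
    queries only, then adapt the search-distribution mean `mu` so that
    later samples follow the better search path."""
    dim = x_adv.size
    grad_est = np.zeros(dim)
    for _ in range(pop // 2):
        z = np.random.randn(dim)              # antithetic pair halves variance
        for sign in (1.0, -1.0):
            u = mu + sign * z                 # sample around the learned mean
            query = np.clip(x_adv + (sigma * u).reshape(x_adv.shape), 0.0, 1.0)
            grad_est += sign * z * loss_fn(query)
    grad_est /= pop * sigma
    # Assumed momentum-style rule: pull the search mean toward directions
    # that raised the attack loss, normalized to keep its scale stable.
    mu = 0.9 * mu + 0.1 * grad_est / (np.linalg.norm(grad_est) + 1e-12)
    x_adv = np.clip(x_adv + lr * np.sign(grad_est).reshape(x_adv.shape), 0.0, 1.0)
    return x_adv, mu
```

Starting from `mu = np.zeros(x.size)` and calling the step repeatedly concentrates sampling along directions that previously increased the loss, which is the query-efficiency effect the abstract attributes to learning better search paths.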

Chinese citation format: 黄立峰,庄文梓,廖泳贤,刘宁. 一种基于进化策略和注意力机制的黑盒对抗攻击算法. 软件学报,2021,32(11):3512−3529. http://www.jos.org.cn/1000-9825/6084.htm
English citation format: Huang LF, Zhuang WZ, Liao YX, Liu N. Black-box adversarial attack method based on evolution strategy and attention mechanism. Ruan Jian Xue Bao/Journal of Software, 2021,32(11):3512−3529 (in Chinese). http://www.jos.org.cn/1000-9825/6084.htm

                 Black-box Adversarial Attack Method Based on Evolution Strategy and Attention Mechanism

HUANG Li-Feng 1,2,  ZHUANG Wen-Zi 1,  LIAO Yong-Xian 1,  LIU Ning 1,2
                 1 (School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China)
                 2 (Guangdong Key Laboratory of Information Security Technology, Guangzhou 510006, China)
Abstract: Since deep neural networks (DNNs) have provided state-of-the-art results for different computer vision tasks, they are utilized as the basic backbones in many domains. Nevertheless, recent research has demonstrated that DNNs are vulnerable to adversarial attacks, which threatens the security of DNN-based systems. Compared with white-box adversarial attacks, black-box attacks are closer to realistic scenarios, operating under constraints such as no knowledge of the model and a limited query budget. However, existing methods under black-box scenarios not only require a large number of model queries but also produce perturbations perceptible to the human visual system. To address these issues, this study proposes a novel method based on evolution strategy, which improves attack performance by considering the inherent distribution of updated gradient directions. This helps the proposed method sample effective solutions with higher probability and learn better search paths. To make the generated adversarial examples less perceptible and to reduce the redundant perturbations after a successful attack, the proposed method utilizes class activation mapping to group the perturbations by introducing the attention mechanism, and then compresses the noise group by group while ensuring that the compressed examples remain adversarial. Comparisons with four state-of-the-art black-box attack methods (AutoZOOM, QL-attack, FD-attack, D-based attack) on seven DNNs validate the effectiveness and robustness of the proposed method.
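The attention-guided compression stage can be sketched in the same spirit. Here `cam` stands for any HxW class activation map of the attacked image and `is_adversarial` for a hypothetical query-only check that the image still fools the target model; the quantile grouping and the 0.5 shrink factor are assumptions for illustration, not the paper's exact schedule.

```python
import numpy as np

def compress_perturbation(x, x_adv, cam, is_adversarial, n_groups=8, shrink=0.5):
    """Shrink the accumulated noise group by group, keeping a shrink only
    if the compressed image remains adversarial."""
    delta = x_adv - x
    # Group pixels by CAM quantiles; low-attention regions are tried first,
    # since removing noise there is least likely to break the attack.
    edges = np.quantile(cam, np.linspace(0.0, 1.0, n_groups + 1))
    for g in range(n_groups):
        mask = (cam >= edges[g]) & (cam <= edges[g + 1])
        trial = delta.copy()
        trial[mask] *= shrink                  # compress this group's noise
        candidate = np.clip(x + trial, 0.0, 1.0)
        if is_adversarial(candidate):          # keep only attack-preserving shrinks
            delta = trial
    return np.clip(x + delta, 0.0, 1.0)
```

Because each shrink is accepted only after a successful query, the loop trades a small number of extra queries for visibly smaller perturbations, matching the imperceptibility goal stated in the abstract.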

∗ Foundation item: National Natural Science Foundation of China (61772567); Fundamental Research Funds for the Central Universities (19lgjc11)
Received 2019-09-29; Revised 2020-01-30, 2020-04-02; Accepted 2020-05-09