Journal of Software (《软件学报》), 2020, No. 11, p. 103

Duan X, et al.: Vulnerability detection method based on code property graph and attention-based bidirectional LSTM                    3419


                [12]    Yamaguchi F, Lindner F, Rieck K. Vulnerability extrapolation: Assisted discovery of vulnerabilities using machine learning. In:
                     Proc. of the 5th USENIX Conf. on Offensive Technologies. 2011. 118−127.
                [13]    Feng Q, Zhou R, Xu C, Cheng Y, Testa B, Yin H. Scalable graph-based bug search for firmware images. In: Proc. of the 2016
                     ACM SIGSAC Conf. on Computer and Communications Security. 2016. 480−491.
                [14]    Xu X, Liu C, Feng Q, Yin H, Song L, Song D. Neural network-based graph embedding for cross-platform binary code similarity
                     detection. In: Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security (CCS 2017). 2017. 363−376.
                [15]    Li Z, Zou D, Xu S, Ou X, Jin H, Wang S, Deng Z, Zhong Y. VulDeePecker: A deep learning-based system for vulnerability
                     detection. In: Proc. of the 2018 Network and Distributed System Security Symp. 2018.
                [16]    Duan X, Wu J, Ji S, Rui Z, Luo T, Yang M, Wu Y. VulSniper: Focus your attention to shoot fine-grained vulnerabilities. In: Proc.
                     of the 28th Int’l Joint Conf. on Artificial Intelligence (IJCAI 2019). 2019. 4665−4671.
                [17]    Grieco G, Grinblat GL, Uzal L, Rawat S, Feist J, Mounier L. Toward large-scale vulnerability discovery using machine learning. In:
                     Proc. of the ACM Conf. on Data and Application Security and Privacy. 2016. 85−96.
                [18]    Kim J, Hubczenko D, Montague P. Towards attention based vulnerability discovery using source code representation. In: Proc. of
                     the Int’l Conf. on Artificial Neural Networks. 2019. 731−746.
                [19]    Russell R, Kim L, Hamilton L, Lazovich T, Harer J, Ozdemir O, Ellingwood P, McConley M. Automated vulnerability detection
                     in source code using deep representation learning. In: Proc. of the 17th IEEE Int’l Conf. on Machine Learning and Applications
                     (ICMLA). 2018. 757−762.
                [20]    Yu H, Lam W, Chen L, Li G, Xie T, Wang Q. Neural detection of semantic code clones via tree-based convolution. In: Proc. of the
                     27th Int’l Conf. on Program Comprehension. 2019. 70−80.
                [21]    Pham NH, Nguyen TT, Nguyen HA, Nguyen TN. Detection of recurring software vulnerabilities. In: Proc. of the Int’l Conf. on
                     Automated Software Engineering. 2010. 447−456.
                [22]    Li J, Ernst MD. CBCD: Cloned buggy code detector. In: Proc. of the Int’l Conf. on Software Engineering. 2012. 310−320.
                [23]    Chang RY, Podgurski A, Yang J. Discovering neglected conditions in software by mining dependence graphs. IEEE Trans. on
                     Software Engineering, 2008,34(5):579−596.
                [24]    Yamaguchi F, Golde N, Arp D, Rieck K. Modeling and discovering vulnerabilities with code property graphs. In: Proc. of the 2014
                     IEEE Symp. on Security and Privacy. 2014. 590−604.
                [25]    Chaudhari S, Polatkan G, Ramanath R, Mithal V. An attentive survey of attention models. arXiv preprint, arXiv: 1904.02874, 2019.
                [26]    Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I. Attention is all you need. In: Proc. of
                     the Advances in Neural Information Processing Systems. 2017. 5998−6008.
                [27]    Xu K, Ba JL, Kiros R, Cho K, Courville A, Salakhutdinov R, Zemel RS, Bengio Y. Show, attend and tell: Neural image caption
                     generation with visual attention. In: Proc. of the 32nd Int’l Conf. on Machine Learning, Vol.37. 2015. 2048−2057.
                [28]    Xiao T, Xu Y, Yang K, Zhang J, Peng Y, Zhang Z. The application of two-level attention models in deep convolutional neural
                     network for fine-grained image classification. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2015.
                     842−850.
                [29]    Jaderberg M, Simonyan K, Zisserman A, Kavukcuoglu K. Spatial transformer networks. In: Proc. of the Advances in Neural
                     Information Processing Systems. 2015. 2017−2025.
                [30]    Wang F, Jiang M, Qian C, Yang S, Li C, Zhang H, Wang X, Tang X. Residual attention network for image classification. In: Proc.
                     of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 6450−6458.
                [31]    Zhao B, Wu X, Feng J, Peng Q, Yan S. Diversified visual attention networks for fine-grained object classification. IEEE Trans. on
                     Multimedia, 2017,19(6):1245−1256.
                [32]    Mnih V, Heess N, Graves A. Recurrent models of visual attention. In: Proc. of the Advances in Neural Information Processing
                     Systems. 2014. 2204−2212.
                [33]    Scarselli F, Gori M, Tsoi AC, Hagenbuchner M, Monfardini G. The graph neural network model. IEEE Trans. on Neural Networks,
                     2009,20(1):61−80.
                [34]    Choi E, Bahadori MT, Song L, Stewart WF, Sun J. GRAM: Graph-based attention model for healthcare representation learning. In:
                     Proc. of the 23rd ACM SIGKDD Int’l Conf. on Knowledge Discovery and Data Mining. 2017. 787−795.