Journal of Software (《软件学报》), 2021, Issue 8

Zou MH, et al.: Enhancing the Security of RRAM Computing Systems via Trojans


9/10000. It can therefore be said that the hardware overhead of our proposed Trojan in an RRAM computing system is very small.

5    Conclusion
Because chip design and fabrication are separated in the semiconductor industry, RRAM computing system chips may be overproduced. Unauthorized RRAM computing systems harm the interests of chip designers, and attackers can easily extract the neural network models stored in them through black-box attacks; the leakage and abuse of these models may cause even more serious harm. To counter this threat, this paper proposes a neuron-level-Trojan-based method that prevents unauthorized RRAM computing systems from being used normally. When the user inputs the correct key, the Trojan embedded in the RRAM computing system is extremely unlikely to be mis-activated, which guarantees the normal operation of an authorized RRAM computing system; when the user inputs an incorrect key, the embedded Trojan is very easily activated, which guarantees that an unauthorized RRAM computing system cannot operate normally. Embedding a neuron-level Trojan into an RRAM computing system does not require retraining the entire neural network, but only a very small number of parameters, so our method is highly efficient. Finally, we conducted experiments on the real deep neural network models LeNet, AlexNet, and VGG16; the results verify the effectiveness of the proposed method and show that its hardware overhead is low. In future work, we will consider inserting the proposed Trojan into the Conv layers of neural networks.
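The key-controlled behavior summarized above can be illustrated with a minimal sketch. The snippet below is a hypothetical toy model (not the paper's actual construction): a spare "Trojan" neuron whose pre-activation is driven by the key bits through a few trained weights, so that the correct key keeps the ReLU output at zero (the network behaves normally), while a wrong key pushes the pre-activation above zero and the neuron fires, corrupting the downstream computation. The weights, bias, and key values here are illustrative assumptions chosen only to show the gating effect.

```python
import numpy as np

def trojan_neuron(key_input, w_key, bias):
    # Hypothetical neuron-level Trojan: a spare neuron whose pre-activation
    # depends only on the key bits, via a handful of trained parameters
    # (this mirrors the paper's point that only very few weights need training).
    pre = float(np.dot(w_key, key_input)) + bias
    # ReLU: with the correct key the pre-activation is negative, so the
    # neuron outputs 0 and contributes nothing to the next layer.
    return max(0.0, pre)

# Illustrative parameters: only the all-ones key silences the neuron.
w_key = np.array([-1.0, -1.0, -1.0, -1.0])  # trained key weights (assumed)
bias = 3.5                                   # correct key: -4.0 + 3.5 = -0.5 < 0

correct_key = np.array([1.0, 1.0, 1.0, 1.0])
wrong_key = np.array([1.0, 0.0, 1.0, 0.0])

print(trojan_neuron(correct_key, w_key, bias))  # 0.0 -> Trojan stays dormant
print(trojan_neuron(wrong_key, w_key, bias))    # 1.5 -> Trojan fires
```

In a full network, the firing Trojan neuron would feed large outgoing weights into the next layer, degrading the model's accuracy on an unauthorized (wrongly keyed) system while leaving the authorized system untouched.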

