Page 197 - Journal of Software (Ruan Jian Xue Bao), 2024, No. 6

Xie RL et al.: IATG: Automated testing method for autonomous driving software based on interpretation analysis                    2773


                     GANs. In: Proc. of the 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018. 8798–8807.
                     [doi: 10.1109/CVPR.2018.00917]
                 [31]  Zhang R, Isola P, Efros AA, Shechtman E, Wang O. The unreasonable effectiveness of deep features as a perceptual metric. In: Proc. of
                     the 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018. 586–595. [doi: 10.1109/CVPR.2018.00068]
                 [32]  Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: From error visibility to structural similarity. IEEE Trans. on
                     Image Processing, 2004, 13(4): 600–612. [doi: 10.1109/TIP.2003.819861]
                 [33]  Wang Z, Simoncelli EP, Bovik AC. Multiscale structural similarity for image quality assessment. In: Proc. of the 37th Asilomar Conf. on
                     Signals, Systems & Computers. Pacific Grove: IEEE, 2003. 1398–1402. [doi: 10.1109/ACSSC.2003.1292216]
                 [34]  Zhang L, Zhang L, Mou XQ, Zhang D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. on Image Processing,
                     2011, 20(8): 2378–2386. [doi: 10.1109/TIP.2011.2109730]
                 [35]  Yuan SL, Guo Z. A warning model for vehicle collision on account of the reaction time of the driver. Journal of Safety and Environment,
                     2021, 21(1): 270–276 (in Chinese with English abstract). [doi: 10.13637/j.issn.1009-6094.2019.0830]
                 [36]  Li Z, Pan MX, Zhang T, Li XD. Testing DNN-based autonomous driving systems under critical environmental conditions. In: Proc. of the
                     38th Int’l Conf. on Machine Learning. PMLR, 2021. 6471–6482.

                 [37]  Ding KY, Ma KD, Wang SQ, Simoncelli EP. Comparison of full-reference image quality models for optimization of image processing
                     systems. Int’l Journal of Computer Vision, 2021, 129(4): 1258–1281. [doi: 10.1007/s11263-020-01419-7]
                 [38]  Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks. In: Proc. of the 2019 IEEE/CVF Conf.
                     on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019. 4396–4405. [doi: 10.1109/CVPR.2019.00453]
                 [39]  Chan C, Ginosar S, Zhou TH, Efros A. Everybody dance now. In: Proc. of the 2019 IEEE/CVF Int’l Conf. on Computer Vision. Seoul:
                     IEEE, 2019. 5932–5941. [doi: 10.1109/ICCV.2019.00603]
                 [40]  Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J, Aila T. Analyzing and improving the image quality of StyleGAN. In: Proc. of the
                     2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 8107–8116. [doi: 10.1109/CVPR42600.2020.00813]
                 [41]  Pei KX, Cao YZ, Yang JF, Jana S. DeepXplore: Automated whitebox testing of deep learning systems. In: Proc. of the 26th Symp. on
                     Operating Systems Principles. Shanghai: ACM, 2017. 1–18. [doi: 10.1145/3132747.3132785]
                 [42]  Borkar TS, Karam LJ. DeepCorrect: Correcting DNN models against image distortions. IEEE Trans. on Image Processing, 2019, 28(12):
                     6022–6034. [doi: 10.1109/TIP.2019.2924172]
                 [43]  Xie XF, Ma L, Juefei-Xu F, Xue MH, Chen HX, Liu Y, Zhao JJ, Li B, Yin JX, See S. DeepHunter: A coverage-guided fuzz testing
                     framework for deep neural networks. In: Proc. of the 28th ACM SIGSOFT Int’l Symp. on Software Testing and Analysis. Beijing: ACM,
                     2019. 146–157. [doi: 10.1145/3293882.3330579]
                 [44]  Wang S, Su ZD. Metamorphic object insertion for testing object detection systems. In: Proc. of the 35th IEEE/ACM Int'l Conf. on
                     Automated Software Engineering. Melbourne: IEEE, 2020. 1053–1065.
                 [45]  Kong ZL, Guo JF, Li A, Liu C. PhysGAN: Generating physical-world-resilient adversarial examples for autonomous driving. In: Proc. of
                     the 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 14242–14251. [doi: 10.1109/CVPR42600.2020.01426]

                 Appendix: Chinese references:
                  [1]  Yu K, Jia L, Chen YQ, Xu W. The yesterday, today, and tomorrow of deep learning. Journal of Computer Research and Development, 2013, 50(9): 1799–1804 (in Chinese). [doi: 10.7544/issn1000-1239.2013.20131180]
                 [10]  Zhu XL, Wang HC, You HM, Zhang WH, Zhang YY, Liu S, Chen JJ, Wang Z, Li KQ. Survey on testing of intelligent systems in autonomous driving. Ruan Jian Xue Bao/Journal of Software, 2021, 32(7): 2056–2077 (in Chinese). http://www.jos.org.cn/1000-9825/6266.htm [doi: 10.13328/j.cnki.jos.006266]
                 [23]  Ji SL, Li JF, Du TY, Li B. Survey on techniques, applications and security of machine learning interpretability. Journal of Computer Research and Development, 2019, 56(10): 2071–2096 (in Chinese). [doi: 10.7544/issn1000-1239.2019.20190540]
                 [35]  Yuan SL, Guo Z. A warning model for vehicle collision on account of the reaction time of the driver. Journal of Safety and Environment, 2021, 21(1): 270–276 (in Chinese). [doi: 10.13637/j.issn.1009-6094.2019.0830]