3460 软件学报 2025 年第 36 卷第 8 期
Narkawicz A, eds. NASA Formal Methods. Cham: Springer Int’l Publishing, 2018. 121–138. [doi: 10.1007/978-3-319-77935-5_9]
[40] Ruan WJ, Wu M, Sun YC, Huang XW, Kroening D, Kwiatkowska M. Global robustness evaluation of deep neural networks with provable guarantees for the Hamming distance. In: Proc. of the 28th Int’l Joint Conf. on Artificial Intelligence. Macao: IJCAI, 2019. 5944–5952.
[41] Weng TW, Zhang H, Chen HG, Song Z, Hsieh CJ, Daniel L, Boning DS, Dhillon IS. Towards fast computation of certified robustness for ReLU networks. In: Proc. of the 35th Int’l Conf. on Machine Learning. Stockholm: PMLR, 2018. 5273–5282.
[42] Wicker M, Huang XW, Kwiatkowska M. Feature-guided black-box safety testing of deep neural networks. In: Beyer D, Huisman M, eds. Tools and Algorithms for the Construction and Analysis of Systems. Cham: Springer, 2018. 408–426. [doi: 10.1007/978-3-319-89960-2_22]
[43] Wu M, Wicker M, Ruan WJ, Huang XW, Kwiatkowska M. A game-based approximate verification of deep neural networks with provable guarantees. Theoretical Computer Science, 2020, 807: 298–329. [doi: 10.1016/j.tcs.2019.05.046]
[44] Tran HD, Lopez DM, Musau P, Yang XD, Nguyen LV, Xiang WM, Johnson TT. Star-based reachability analysis of deep neural networks. In: Ter Beek MH, McIver A, Oliveira JN, eds. Formal Methods—The Next 30 Years. Cham: Springer, 2019. 670–686. [doi: 10.1007/978-3-030-30942-8_39]
[45] Tran HD, Bak S, Xiang WM, Johnson TT. Verification of deep convolutional neural networks using ImageStars. In: Lahiri SK, Wang C, eds. Computer Aided Verification. Cham: Springer, 2020. 18–42. [doi: 10.1007/978-3-030-53288-8_2]
[46] Yang PF, Li JL, Liu JC, Huang CC, Li RJ, Chen LQ, Huang XW, Zhang LJ. Enhancing robustness verification for deep neural networks via symbolic propagation. Formal Aspects of Computing, 2021, 33(3): 407–435. [doi: 10.1007/s00165-021-00548-1]
[47] Li RJ, Yang PF, Huang CC, Sun YC, Xue B, Zhang LJ. Towards practical robustness analysis for DNNs based on PAC-model learning. In: Proc. of the 44th Int’l Conf. on Software Engineering. Pittsburgh: ACM, 2022. 2189–2201. [doi: 10.1145/3510003.3510143]
[48] Baluta T, Chua ZL, Meel KS, Saxena P. Scalable quantitative verification for deep neural networks. In: Proc. of the 43rd IEEE/ACM Int’l Conf. on Software Engineering. Madrid: IEEE, 2021. 312–323. [doi: 10.1109/ICSE43902.2021.00039]
[49] Cardelli L, Kwiatkowska M, Laurenti L, Paoletti N, Patane A, Wicker M. Statistical guarantees for the robustness of Bayesian neural networks. In: Proc. of the 28th Int’l Joint Conf. on Artificial Intelligence. Macao: IJCAI, 2019. 5693–5700.
[50] Mangal R, Nori AV, Orso A. Robustness of neural networks: A probabilistic and practical approach. In: Proc. of the 41st IEEE/ACM Int’l Conf. on Software Engineering: New Ideas and Emerging Results (ICSE-NIER). Montreal: IEEE, 2019. 93–96. [doi: 10.1109/ICSE-NIER.2019.00032]
[51] Ugare S, Singh G, Misailovic S. Proof transfer for fast certification of multiple approximate neural networks. Proc. of the ACM on Programming Languages, 2022, 6(OOPSLA1): 75. [doi: 10.1145/3527319]
[52] Ding F, Denain JS, Steinhardt J. Grounding representation similarity with statistical testing. In: Proc. of the 35th Int’l Conf. on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2021. 120.
[53] Hamilton WL, Leskovec J, Jurafsky D. Cultural shift or linguistic drift? Comparing two computational measures of semantic change. In: Proc. of the 2016 Conf. on Empirical Methods in Natural Language Processing. Austin: Association for Computational Linguistics, 2016. 2116–2121. [doi: 10.18653/v1/D16-1229]
[54] Shahbazi M, Shirali A, Aghajan H, Nili H. Using distance on the Riemannian manifold to compare representations in brain and in models. NeuroImage, 2021, 239: 118271. [doi: 10.1016/j.neuroimage.2021.118271]
[55] Madani O, Pennock DM, Flake GW. Co-validation: Using model disagreement on unlabeled data to validate classification algorithms. In: Proc. of the 18th Int’l Conf. on Neural Information Processing Systems. Vancouver: MIT Press, 2004. 873–880.
[56] Hsu H, Calmon FP. Rashomon capacity: A metric for predictive multiplicity in classification. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2022. 2101.
[57] Li YC, Zhang ZQ, Liu BY, Yang ZY, Liu YX. ModelDiff: Testing-based DNN similarity comparison for model reuse detection. In: Proc. of the 30th ACM SIGSOFT Int’l Symp. on Software Testing and Analysis. New York: ACM, 2021. 139–151. [doi: 10.1145/3460319.3464816]
[58] Paulsen B, Wang JB, Wang JW, Wang C. NEURODIFF: Scalable differential verification of neural networks using fine-grained approximation. In: Proc. of the 35th IEEE/ACM Int’l Conf. on Automated Software Engineering. Melbourne: IEEE, 2020. 784–796.
[59] Paulsen B, Wang JB, Wang C. ReLUDiff: Differential verification of deep neural networks. In: Proc. of the 42nd IEEE/ACM Int’l Conf. on Software Engineering. Seoul: IEEE, 2020. 714–726.
[60] Mohammadinejad S, Paulsen B, Deshmukh JV, Wang C. DiffRNN: Differential verification of recurrent neural networks. In: Proc. of the 19th Int’l Conf. on Formal Modeling and Analysis of Timed Systems. Paris: Springer, 2021. 117–134. [doi: 10.1007/978-3-030-85037-1_8]