Page 382 - Journal of Software (《软件学报》), 2025, Issue 7
Gao MN, et al.: A survey of backdoor attacks and defenses for deep learning 3303
2016 IEEE Winter Conf. on Applications of Computer Vision (WACV). Lake Placid: IEEE, 2016. 1–9. [doi: 10.1109/WACV.2016.7477558]
[152] Guo YD, Zhang L, Hu YX, He XD, Gao JF. MS-Celeb-1M: A dataset and benchmark for large-scale face recognition. In: Proc. of the
14th European Conf. on Computer Vision (ECCV). Amsterdam: Springer, 2016. 87–102. [doi: 10.1007/978-3-319-46487-9_6]
[153] Huang GB, Ramesh M, Berg T, Learned-Miller E. Labeled faces in the wild: A database for studying face recognition in unconstrained
environments. In: Workshop on Faces in ‘Real-life’ Images: Detection, Alignment, and Recognition. 2008. https://inria.hal.science/inria-00321923
[154] Wolf L, Hassner T, Maoz I. Face recognition in unconstrained videos with matched background similarity. In: Proc. of the 24th IEEE
Conf. on Computer Vision and Pattern Recognition. Colorado Springs: IEEE, 2011. 529–534. [doi: 10.1109/CVPR.2011.5995566]
[155] CASIA dataset. 2024. http://biometrics.idealtest.org/#/
[156] Eidinger E, Enbar R, Hassner T. Age and gender estimation of unfiltered faces. IEEE Trans. on Information Forensics and Security,
2014, 9(12): 2170–2179. [doi: 10.1109/TIFS.2014.2359646]
[157] Zheng L, Shen LY, Tian L, Wang SJ, Wang JD, Tian Q. Scalable person re-identification: A benchmark. In: Proc. of the 2015 IEEE Int’l
Conf. on Computer Vision (ICCV). Santiago: IEEE, 2015. 1116–1124. [doi: 10.1109/ICCV.2015.133]
[158] Zheng ZD, Zheng L, Yang Y. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In: Proc. of
the 2017 IEEE Int’l Conf. on Computer Vision (ICCV). Venice: IEEE, 2017. 3774–3782. [doi: 10.1109/ICCV.2017.405]
[159] Goodfellow IJ, Erhan D, Carrier PL, Courville A, Mirza M, Hamner B, Cukierski W, Tang YC, Thaler D, Lee DH, Zhou YB, Ramaiah
C, Feng FX, Li RF, Wang XJ, Athanasakis D, Shawe-Taylor J, Milakov M, Park J, Ionescu R, Popescu M, Grozea C, Bergstra J, Xie JJ,
Romaszko L, Xu B, Chuang Z, Bengio Y. Challenges in Representation Learning: A report on three machine learning contests. In: Proc.
of the 20th Int’l Conf. on Neural Information Processing (ICONIP). Daegu: Springer, 2013. 117–124. [doi: 10.1007/978-3-642-42051-1_16]
[160] Combalia M, Codella NCF, Rotemberg V, Helba B, Vilaplana V, Reiter O, Carrera C, Barreiro A, Halpern AC, Puig S, Malvehy J.
BCN20000: Dermoscopic lesions in the wild. arXiv:1908.02288, 2019.
[161] Heller N, Sathianathen N, Kalapara A, Walczak E, Moore K, Kaluzniak H, Rosenberg J, Blake P, Rengel Z, Oestreich M, Dean J,
Tradewell M, Shah A, Tejpaul R, Edgerton Z, Peterson M, Raza S, Regmi S, Papanikolopoulos N, Weight C. The KiTS19 challenge
data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv:1904.00445, 2020.
[162] Ali S, Zhou F, Daul C, Braden B, Bailey A, Realdon S, East J, Wagnières G, Loschenov V, Grisan E, Blondel W, Rittscher J.
Endoscopy artifact detection (EAD 2019) challenge dataset. arXiv:1905.03209, 2019.
[163] Johnson AEW, Pollard TJ, Greenbaum NR, Lungren MP, Deng CY, Peng YF, Lu ZY, Mark RG, Berkowitz SJ, Horng S. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv:1901.07042, 2019.
[164] Rahman T, Khandakar A, Qiblawey Y, Tahir A, Kiranyaz S, Kashem SBA, Islam MT, Al Maadeed S, Zughaier SM, Khan MS,
Chowdhury MEH. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Computers
in Biology and Medicine, 2021, 132: 104319. [doi: 10.1016/j.compbiomed.2021.104319]
[165] Sharma P, Ding N, Goodman S, Soricut R. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image
captioning. In: Proc. of the 56th Annual Meeting of the Association for Computational Linguistics, Vol. 1 (Long Papers). Melbourne:
ACL, 2018. 2556–2565. [doi: 10.18653/v1/P18-1238]
[166] Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL. Microsoft COCO: Common objects in context. In:
Proc. of the 13th European Conf. on Computer Vision (ECCV). Zurich: Springer, 2014. 740–755. [doi: 10.1007/978-3-319-10602-1_48]
[167] Thomee B, Shamma DA, Friedland G, Elizalde B, Ni K, Poland D, Borth D, Li LJ. YFCC100M: The new data in multimedia research.
Communications of the ACM, 2016, 59(2): 64–73. [doi: 10.1145/2812802]
[168] Wu ZR, Song SR, Khosla A, Yu F, Zhang LG, Tang XO, Xiao JX. 3D ShapeNets: A deep representation for volumetric shapes. In:
Proc. of the 2015 IEEE Conf. on Computer Vision and Pattern Recognition. Boston: IEEE, 2015. 1912–1920. [doi: 10.1109/CVPR.2015.7298801]
[169] Chang AX, Funkhouser T, Guibas L, Hanrahan P, Huang QX, Li ZM, Savarese S, Savva M, Song SR, Su H, Xiao JX, Yi L, Yu F.
ShapeNet: An information-rich 3D model repository. arXiv:1512.03012, 2015.
[170] Lian Z, Godil A, Bustos B, Daoudi M, Hermans J, Kawamura S, Kurita Y, Lavoué G, Nguyen HV, Ohbuchi R, Ohkita Y, Ohishi Y,
Porikli F, Reuter M, Sipiran I, Smeets D, Suetens P, Tabia H, Vandermeulen D. SHREC’11 track: Shape retrieval on non-rigid 3D
watertight meshes. In: Proc. of the Eurographics Workshop on 3D Object Retrieval. Llandudno: Eurographics Association, 2011. 79–88.
[doi: 10.2312/3DOR/3DOR11/079-088]
[171] Hanocka R, Hertz A, Fish N, Giryes R, Fleishman S, Cohen-Or D. MeshCNN: A network with an edge. ACM Trans. on Graphics
(TOG), 2019, 38(4): 90. [doi: 10.1145/3306346.3322959]
[172] Hu SM, Liu ZN, Guo MH, Cai JX, Huang JH, Mu TJ, Martin RR. Subdivision-based mesh convolution networks. ACM Trans. on

