[6] Finn C, Abbeel P, Levine S. Model-agnostic meta-learning for fast adaptation of deep networks. In: Proc. of the Int’l Conf. on
Machine Learning. 2017. 1126−1135.
[7] Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning. In: Proc. of the 31st Int’l Conf. on Neural Information
Processing Systems. Cambridge: MIT, 2017. 4080−4090.
[8] Nichol A, Achiam J, Schulman J. On first-order meta-learning algorithms. arXiv:1803.02999, 2018.
[9] Rajeswaran A, Finn C, Kakade SM, et al. Meta-learning with implicit gradients. In: Advances in Neural Information Processing
Systems, Vol.32. 2019.
[10] Lee Y, Choi S. Gradient-based meta-learning with learned layerwise metric and subspace. In: Proc. of the Int’l Conf. on Machine
Learning. 2018. 2927−2936.
[11] Li Z, Zhou F, Chen F, et al. Meta-SGD: Learning to learn quickly for few-shot learning. arXiv:1707.09835, 2017.
[12] Yoon J, Kim T, Dia O, et al. Bayesian model-agnostic meta-learning. In: Advances in Neural Information Processing Systems,
Vol.31. 2018.
[13] Ravi S, Beatson A. Amortized Bayesian meta-learning. In: Proc. of the Int’l Conf. on Learning Representations. 2019.
[14] Patacchiola M, Turner J, Crowley EJ, et al. Bayesian meta-learning for the few-shot setting via deep kernels. In: Advances in
Neural Information Processing Systems, Vol.33. 2020. 16108−16118.
[15] Zhang Q, Fang J, Meng Z, et al. Variational continual Bayesian meta-learning. In: Advances in Neural Information Processing
Systems, Vol.34. 2021. 24556−24568.
[16] Chen L, Chen T. Is Bayesian model-agnostic meta learning better than model-agnostic meta learning, provably? In: Proc. of the
Int’l Conf. on Artificial Intelligence and Statistics. 2022. 1733−1774.
[17] Hou R, Chang H, Ma B, et al. Cross attention network for few-shot classification. In: Advances in Neural Information Processing
Systems. 2019. 4003−4014.
[18] Xing C, Rostamzadeh N, Oreshkin B, et al. Adaptive cross-modal few-shot learning. In: Advances in Neural Information
Processing Systems. 2019. 4847−4857.
[19] Bateni P, Goyal R, Masrani V, et al. Improved few-shot visual classification. In: Proc. of the IEEE/CVF Conf. on Computer Vision
and Pattern Recognition. 2020. 14481−14490.
[20] Zhang C, Cai Y, Lin G, et al. DeepEMD: Few-shot image classification with differentiable earth mover’s distance and structured
classifiers. In: Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition. 2020. 12203−12213.
[21] Sung F, Yang Y, Zhang L, et al. Learning to compare: Relation network for few-shot learning. In: Proc. of the IEEE Conf. on
Computer Vision and Pattern Recognition. 2018. 1199−1208.
[22] Killamsetty K, Li C, Zhao C, et al. A nested bi-level optimization framework for robust few shot learning. In: Proc. of the 36th
AAAI Conf. on Artificial Intelligence. 2022. 7176−7184.
[23] Cai D, Sheth R, Mackey L, et al. Weighted meta-learning. arXiv:2003.09465, 2020.
[24] Yao H, Huang LK, Zhang L, et al. Improving generalization in meta-learning via task augmentation. In: Proc. of the Int’l Conf. on
Machine Learning. 2021. 11887−11897.
[25] Yao H, Zhang L, Finn C. Meta-learning with fewer tasks through task interpolation. arXiv:2106.02695, 2021.
[26] Ni R, Goldblum M, Sharaf A, et al. Data augmentation for meta-learning. In: Proc. of the Int’l Conf. on Machine Learning. 2021.
8152−8161.
[27] Smola AJ, Gretton A, Borgwardt K. Maximum mean discrepancy. In: Proc. of the 13th Int’l Conf. on Neural Information Processing (ICONIP). 2006. 3−6.
[28] Zhang H, Cisse M, Dauphin YN, et al. Mixup: Beyond empirical risk minimization. In: Proc. of the Int’l Conf. on Learning
Representations. 2018.
[29] Verma V, Lamb A, Beckham C, et al. Manifold mixup: Better representations by interpolating hidden states. In: Proc. of the Int’l
Conf. on Machine Learning. 2019. 6438−6447.
[30] Yun S, Han D, Oh SJ, et al. Cutmix: Regularization strategy to train strong classifiers with localizable features. In: Proc. of the
IEEE/CVF Int’l Conf. on Computer Vision. 2019. 6023−6032.
[31] Kim JH, Choo W, Song HO. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In: Proc. of the Int’l Conf. on
Machine Learning. 2020. 5275−5285.