[3] Jia RP, Cao Y, Fang F, Zhou YC, Fang Z, Liu YB, Wang S. Deep differential amplifier for extractive summarization. In: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing (Vol. 1: Long Papers). Online: Association for Computational Linguistics, 2021. 366–376. [doi: 10.18653/v1/2021.acl-long.31]
[4] Pilault J, Li R, Subramanian S, Pal C. On extractive and abstractive neural document summarization with Transformer language models. In: Proc. of the 2020 Conf. on Empirical Methods in Natural Language Processing. Online: Association for Computational Linguistics, 2020. 9308–9319. [doi: 10.18653/v1/2020.emnlp-main.748]
[5] Shi JX, Liang C, Hou L, Li JZ, Liu ZY, Zhang HW. DeepChannel: Salience estimation by contrastive learning for extractive document summarization. In: Proc. of the 33rd AAAI Conf. on Artificial Intelligence. Honolulu: AAAI, 2019. 6999–7006. [doi: 10.1609/aaai.v33i01.33016999]
[6] Liu YX, Liu PF. SimCLS: A simple framework for contrastive learning of abstractive summarization. In: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int’l Joint Conf. on Natural Language Processing (Vol. 2: Short Papers). Online: Association for Computational Linguistics, 2021. 1065–1072. [doi: 10.18653/v1/2021.acl-short.135]
[7] Liu YX, Liu PF, Radev D, Neubig G. BRIO: Bringing order to abstractive summarization. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers). Dublin: Association for Computational Linguistics, 2022. 2890–2903. [doi: 10.18653/v1/2022.acl-long.207]
[8] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional Transformers for language understanding. In: Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1 (Long and Short Papers). Minneapolis: Association for Computational Linguistics, 2019. 4171–4186. [doi: 10.18653/v1/N19-1423]
[9] Lewis M, Liu YH, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, 2020. 7871–7880. [doi: 10.18653/v1/2020.acl-main.703]
[10] Liu Y, Lapata M. Text summarization with pretrained encoders. In: Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int’l Joint Conf. on Natural Language Processing. Hong Kong: Association for Computational Linguistics, 2019. 3730–3740. [doi: 10.18653/v1/D19-1387]
[11] Pietruszka M, Borchmann Ł, Garncarek Ł. Sparsifying transformer models with trainable representation pooling. In: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers). Dublin: Association for Computational Linguistics, 2022. 8616–8633. [doi: 10.18653/v1/2022.acl-long.590]
[12] Guo M, Ainslie J, Uthus D, Ontanon S, Ni J, Sung YH, Yang YF. LongT5: Efficient text-to-text Transformer for long sequences. In: Findings of the Association for Computational Linguistics: NAACL 2022. Seattle: Association for Computational Linguistics, 2022. 724–736. [doi: 10.18653/v1/2022.findings-naacl.55]
[13] Rothe S, Narayan S, Severyn A. Leveraging pre-trained checkpoints for sequence generation tasks. Trans. of the Association for Computational Linguistics, 2020, 8: 264–280. [doi: 10.1162/tacl_a_00313]
[14] Ge SY, Huang JX, Meng Y, Han JW. FineSum: Target-oriented, fine-grained opinion summarization. In: Proc. of the 16th ACM Int’l Conf. on Web Search and Data Mining. Singapore: ACM, 2023. 1093–1101. [doi: 10.1145/3539597.3570397]
[15] Yoon S, Chan HP, Han JW. PDSum: Prototype-driven continuous summarization of evolving multi-document sets stream. In: Proc. of the 2023 ACM Web Conf. Austin: ACM, 2023. 1650–1661. [doi: 10.1145/3543507.3583371]
[16] Amplayo RK, Lapata M. Unsupervised opinion summarization with noising and denoising. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, 2020. 1934–1945. [doi: 10.18653/v1/2020.acl-main.175]
[17] Bražinskas A, Lapata M, Titov I. Unsupervised opinion summarization as copycat-review generation. In: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, 2020. 5151–5169. [doi: 10.18653/v1/2020.acl-main.461]
[18] Andy A, Wijaya DT, Callison-Burch C. Winter is here: Summarizing Twitter streams related to pre-scheduled events. In: Proc. of the 2nd Workshop on Storytelling. Florence: Association for Computational Linguistics, 2019. 112–116. [doi: 10.18653/v1/W19-3412]
[19] Gillani M, Ilyas MU, Saleh S, Alowibdi JS, Aljohani N, Alotaibi FS. Post summarization of microblogs of sporting events. In: Proc. of the 26th Int’l Conf. on World Wide Web Companion. Perth, 2017. 59–68. [doi: 10.1145/3041021.3054146]
[20] Zogan H, Razzak I, Jameel S, Xu GD. DepressionNet: A novel summarization boosted deep framework for depression detection on social media. In: Proc. of the 44th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. Online: ACM, 2021. 133–142. [doi: 10.1145/3404835.3462938]