                 [48]  Chang C, Peng WC, Chen TF. LLM4TS: Two-stage fine-tuning for time-series forecasting with pre-trained LLMs.
                      arXiv:2308.08469v1, 2023.
                 [49]  Zhou T, Niu PS, Wang X, Sun L, Jin R. One fits all: Power general time series analysis by pretrained LM. In: Proc. of the 37th Conf. on
                      Neural Information Processing Systems. New Orleans: NeurIPS, 2023. 36.
                 [50]  Sun CX, Li HY, Li YL, Hong SD. TEST: Text prototype aligned embedding to activate LLM’s ability for time series. In: Proc. of the
                      12th Int’l Conf. on Learning Representations. Vienna: OpenReview.net, 2024.
                 [51]  Cao DF, Jia FR, Arik SÖ, Pfister T, Zheng YX, Ye W, Liu Y. TEMPO: Prompt-based generative pre-trained transformer for time series
                      forecasting. In: Proc. of the 12th Int’l Conf. on Learning Representations. Vienna: OpenReview.net, 2024.
                 [52]  Gruver N, Finzi M, Qiu SK, Wilson AG. Large language models are zero-shot time series forecasters. In: Proc. of the 37th Conf. on
                      Neural Information Processing Systems. New Orleans: NeurIPS, 2023. 36.
                 [53]  Xie QQ, Han WG, Lai YZ, Peng M, Huang JM. The Wall Street neophyte: A zero-shot analysis of ChatGPT over multimodal stock
                      movement prediction challenges. arXiv:2304.05351, 2023.
                 [54]  Yu XL, Chen Z, Ling Y, Dong SJ, Liu ZY, Lu YB. Temporal data meets LLM—Explainable financial time series forecasting.
                      arXiv:2306.11025, 2023.
                 [55]  Zhang BY, Yang HY, Liu XY. Instruct-FinGPT: Financial sentiment analysis by instruction tuning of general-purpose large language
                      models. arXiv:2306.12659, 2023.
                 [56]  Lopez-Lira  A,  Tang  YH.  Can  ChatGPT  forecast  stock  price  movements?  Return  predictability  and  large  language  models.
                      arXiv:2304.07619, 2024.
                 [57]  Liu X, McDuff D, Kovacs G, Galatzer-Levy I, Sunshine J, Zhan JN, Poh MZ, Liao S, Di Achille P, Patel S. Large language models are
                      few-shot health learners. arXiv:2305.15525, 2023.
                 [58]  Li J, Liu C, Cheng SB, Arcucci R, Hong SD. Frozen language model helps ECG zero-shot learning. In: Proc. of the 2023 Medical
                      Imaging with Deep Learning. Nashville: PMLR, 2024. 402–415.
                 [59]  Xue H, Voutharoja BP, Salim FD. Leveraging language foundation models for human mobility forecasting. In: Proc. of the 30th Int’l
                      Conf. on Advances in Geographic Information Systems. Seattle: ACM, 2022. 90. [doi: 10.1145/3557915.356102]
                 [60]  Kaplan J, McCandlish S, Henighan T, Brown TB, Chess B, Child R, Gray S, Radford A, Wu J, Amodei D. Scaling laws for neural
                      language models. arXiv:2001.08361, 2020.
                 [61]  Zhang ZW, Li HY, Zhang ZY, Qin YJ, Wang X, Zhu WW. Graph meets LLMs: Towards large graph models. In: Proc. of the 2023
                      NeurIPS New Frontiers in Graph Learning Workshop. NeurIPS GLFrontiers, 2023. 1–12.
                 [62]  Chen ZK, Mao HT, Li H, Jin W, Wen HZ, Wei XC, Wang SQ, Yin DW, Fan WQ, Liu H, Tang JL. Exploring the potential of large
                      language models (LLMs) in learning on graphs. ACM SIGKDD Explorations Newsletter, 2024, 25(2): 42–61. [doi: 10.1145/3655103.
                      3655110]
                 [63]  He  XX,  Bresson  X,  Laurent  T,  Perold  A,  LeCun  Y,  Hooi  B.  Harnessing  explanations:  LLM-to-LM  interpreter  for  enhanced  text-
                      attributed graph representation learning. In: Proc. of the 12th Int’l Conf. on Learning Representations. Vienna: OpenReview.net, 2024.
                 [64]  Brown  TB,  Mann  B,  Ryder  N,  et  al.  Language  models  are  few-shot  learners.  In:  Proc.  of  the  34th  Conf.  on  Neural  Information
                      Processing Systems. Vancouver: NeurIPS, 2020. 1877–1901.
                 [65]  Sun TX, Shao YF, Qian H, Huang XJ, Qiu XP. Black-box tuning for language-model-as-a-service. In: Proc. of the 39th Int’l Conf. on
                      Machine Learning. Baltimore: PMLR, 2022. 20841–20855.
                 [66]  He PC, Liu XD, Gao JF, Chen WZ. DeBERTa: Decoding-enhanced BERT with disentangled attention. In: Proc. of the 9th Int’l Conf. on
                      Learning Representations. OpenReview.net, 2021.
                 [67]  Ren XB, Wei W, Xia LH, Su LX, Cheng SQ, Wang JF, Yin DW, Huang C. Representation learning with large language models for
                      recommendation. In: Proc. of the 2024 ACM Web Conf. Singapore: ACM, 2024. 3464–3475. [doi: 10.1145/3589334.3645458]
                 [68]  Koren  Y,  Rendle  S,  Bell  R.  Advances  in  collaborative  filtering.  In:  Ricci  F,  Rokach  L,  Shapira  B,  eds.  Recommender  Systems
                      Handbook. New York: Springer, 2022. 91–142.
                 [69]  Tang JB, Yang YH, Wei W, Shi L, Su LX, Cheng SQ, Yin DW, Huang C. GraphGPT: Graph instruction tuning for large language
                      models. In: Proc. of the 47th Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval. Washington: ACM, 2024.
                      491–500. [doi: 10.1145/3626772.3657775]
                 [70]  Yun  S,  Jeong  M,  Kim  R,  Kang  J,  Kim  HJ.  Graph  transformer  networks.  In:  Proc.  of  the  33rd  Int’l  Conf.  on  Neural  Information
                      Processing Systems. Vancouver: Curran Associates Inc., 2019. 1073.
                 [71]  Wei J, Wang XZ, Schuurmans D, Bosma M, Ichter B, Xia F, Chi EH, Le QV, Zhou D. Chain-of-thought prompting elicits reasoning in
                      large language models. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: Curran Associates
                      Inc., 2022.