Page 418 - 《软件学报》2025年第12期
P. 418
李洁 等: 历史交通数据驱动的 VANET 智能路由算法 5799
networks. IEEE Communications Magazine, 2017, 55(7): 180–185. [doi: 10.1109/MCOM.2017.1600348]
[18] Zhang WZ, Jiang LL, Song X, Shao ZY. Weight-based PA-GPSR protocol improvement method in VANET. Sensors, 2023, 23(13):
5991. [doi: 10.3390/s23135991]
[19] Wang ZL, Yao HP, Mai TL, Xiong ZH, Wu XH, Wu D, Guo S. Learning to routing in UAV swarm network: A multi-agent
reinforcement learning approach. IEEE Trans. on Vehicular Technology, 2023, 72(5): 6611–6624. [doi: 10.1109/TVT.2022.3232815]
[20] Kumar M, Raw RS. RC-LAHR: Road-side-unit-assisted cloud-based location-aware hybrid routing for software-defined vehicular ad hoc
networks. Sensors, 2024, 24(4): 1045. [doi: 10.3390/s24041045]
[21] Nguyen KV, Nguyen PL, Vu QH, Do TV. An energy efficient and load balanced distributed routing scheme for wireless sensor networks
with holes. Journal of Systems and Software, 2017, 123: 92–105. [doi: 10.1016/j.jss.2016.10.004]
[22] Harayama M, Mishioka M. Link quality-aware geographic predictive routing for V2V network based on GPSR. In: Proc. of the 2023 Int’l
Technical Conf. on Circuits/Systems, Computers, and Communications (ITC-CSCC). Jeju: IEEE, 2023. 1–6. [doi: 10.1109/ITC-CSCC
58803.2023.10212525]
[23] Elgaroui L, Pierre S, Chamberland S. New routing protocol for reliability to intelligent transportation communication. IEEE Trans. on
Mobile Computing, 2023, 22(4): 2281–2294. [doi: 10.1109/TMC.2021.3116157]
[24] Liu BY, Sheng Y, Shao X, Ji YS, Han WZ, Wang ES, Xiong SW. Collaborative intelligence enabled routing in green IoV: A grid and
vehicle density prediction-based protocol. IEEE Trans. on Green Communications and Networking, 2023, 7(2): 1012–1022. [doi: 10.1109/
TGCN.2022.3188026]
[25] Cheng JJ, Yuan GY, Zhou MC, Gao SC, Huang ZH, Liu C. A connectivity-prediction-based dynamic clustering model for VANET in an
urban scene. IEEE Internet of Things Journal, 2020, 7(9): 8410–8418. [doi: 10.1109/JIOT.2020.2990935]
[26] Chen KS, Li HK, Ruan YL, Wang SH. Improved AODV routing protocol based on local neighbor nodes and link weights. Ruan Jian Xue
Bao/Journal of Software, 2021, 32(4): 1186–1200 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5970.htm [doi: 10.
13328/j.cnki.jos.005970]
[27] Yang Q, Yoo SJ. Hierarchical reinforcement learning-based routing algorithm with grouped RSU in urban VANETs. IEEE Trans. on
Intelligent Transportation Systems, 2024, 25(8): 10131–10146. [doi: 10.1109/TITS.2024.3353258]
[28] Casas-Velasco DM, Rendon OMC, da Fonseca NLS. DRSIR: A deep reinforcement learning approach for routing in software-defined
networking. IEEE Trans. on Network and Service Management, 2022, 19(4): 4807–4820. [doi: 10.1109/TNSM.2021.3132491]
[29] Liu JR, Zhu H, Zhang GA. Geographic location routing protocol based on Q-learning in Internet of vehicles. Telecommunication
Engineering, 2023, 63(10): 1472–1478 (in Chinese with English abstract). [doi: 10.20079/j.issn.1001-893x.220514002]
[30] Li F, Song XY, Chen HJ, Li X, Wang Y. Hierarchical routing for vehicular ad hoc networks via reinforcement learning. IEEE Trans. on
Vehicular Technology, 2019, 68(2): 1852–1865. [doi: 10.1109/TVT.2018.2887282]
[31] Wu CLMG, Liu Z, Liu FQ, Yoshinaga T, Ji YS, Li J. Collaborative learning of communication routes in edge-enabled multi-access
vehicular environment. IEEE Trans. on Cognitive Communications and Networking, 2020, 6(4): 1155–1165. [doi: 10.1109/TCCN.2020.
3002253]
[32] Luo L, Sheng L, Yu HF, Sun G. Intersection-based V2X routing via reinforcement learning in vehicular ad hoc networks. IEEE Trans. on
Intelligent Transportation Systems, 2022, 23(6): 5446–5459. [doi: 10.1109/TITS.2021.3053958]
[33] Shan AXD, Fan XM, Chen XF, Ji YS, Wu CLMG. A reinforcement learning-based incentive scheme for multi-hop communications in
vehicular networks. IEEE Trans. on Cognitive Communications and Networking, 2024, 10(1): 335–347. [doi: 10.1109/TCCN.2023.
3316644]
[34] Jiang Y, Zhu JL, Yang KX. Environment-aware adaptive reinforcement learning-based routing for vehicular ad hoc networks. Sensors,
2023, 24(1): 40. [doi: 10.3390/s24010040]
[35] Zhu RX, Jiang QH, Huang XD, Li DS, Yang QL. A reinforcement-learning-based opportunistic routing protocol for energy-efficient and
void-avoided UASNs. IEEE Sensors Journal, 2022, 22(13): 13589–13601. [doi: 10.1109/JSEN.2022.3175994]
[36] Wang C, Shen XH, Wang HY, Zhang HW, Mei HD. Reinforcement learning-based opportunistic routing protocol using depth
information for energy-efficient underwater wireless sensor networks. IEEE Sensors Journal, 2023, 23(15): 17771–17783. [doi: 10.1109/
JSEN.2023.3285751]
[37] Qiang M, Chen XJ, Yin XY, Jia RZ, Xu D, Tang ZY, Fang DY. Data transmission for joint optimization of flow scheduling and path
selection in VANETs. Ruan Jian Xue Bao/Journal of Software, 2019, 30(2): 346–361 (in Chinese with English abstract). http://www.jos.
org.cn/1000-9825/5384.htm [doi: 10.13328/j.cnki.jos.005384]
[38] Abbasi HI, Voicu RC, Copeland JA, Chang YS. Cooperative BSM architecture to improve transportation safety in VANETs. In: Proc. of
the 13th Int’l Wireless Communications and Mobile Computing Conf. (IWCMC). Valencia: IEEE, 2017. 1016–1022. [doi: 10.1109/

