[1] HU J, WANG Q, YANG K. Energy self-sustainability in full-spectrum 6G [J]. IEEE wireless communications, 2021, 28(1): 104–111. DOI: 10.1109/MWC.001.2000156
[2] WU D P, HE J, WANG H G, et al. A hierarchical packet forwarding mechanism for energy harvesting wireless sensor networks [J]. IEEE communications magazine, 2015, 53(8): 92–98. DOI: 10.1109/MCOM.2015.7180514
[3] GHARIBI M, BOUTABA R, WASLANDER S L. Internet of drones [J]. IEEE access, 2016, 4: 1148–1162. DOI: 10.1109/ACCESS.2016.2537208
[4] ZHU S C, GUI L, CHENG N, et al. Joint design of access point selection and path planning for UAV-assisted cellular networks [J]. IEEE Internet of Things journal, 2020, 7(1): 220–233. DOI: 10.1109/JIOT.2019.2947718
[5] BEN GHORBEL M, RODRÍGUEZ-DUARTE D, GHAZZAI H, et al. Joint position and travel path optimization for energy efficient wireless data gathering using unmanned aerial vehicles [J]. IEEE transactions on vehicular technology, 2019, 68(3): 2165–2175. DOI: 10.1109/TVT.2019.2893374
[6] HU J, CAI X P, YANG K. Joint trajectory and scheduling design for UAV aided secure backscatter communications [J]. IEEE wireless communications letters, 2020, 9(12): 2168–2172. DOI: 10.1109/LWC.2020.3016174
[7] XIE L F, XU J, ZHANG R. Throughput maximization for UAV-enabled wireless powered communication networks [J]. IEEE Internet of Things journal, 2019, 6(2): 1690–1703. DOI: 10.1109/JIOT.2018.2875446
[8] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing Atari with deep reinforcement learning [EB/OL]. [2023-03-10].
[9] LI K, NI W, TOVAR E, et al. On-board deep Q-network for UAV-assisted online power transfer and data collection [J]. IEEE transactions on vehicular technology, 2019, 68(12): 12215–12226. DOI: 10.1109/TVT.2019.2945037
[10] TANG J, SONG J R, OU J H, et al. Minimum throughput maximization for multi-UAV enabled WPCN: a deep reinforcement learning method [J]. IEEE access, 2020, 8: 9124–9132. DOI: 10.1109/ACCESS.2020.2964042
[11] LIU C H, CHEN Z Y, TANG J, et al. Energy-efficient UAV control for effective and fair communication coverage: a deep reinforcement learning approach [J]. IEEE journal on selected areas in communications, 2018, 36(9): 2059–2070. DOI: 10.1109/JSAC.2018.2864373
[12] LU Y P, XIONG G, ZHANG X, et al. Uplink throughput maximization in UAV-aided mobile networks: a DQN-based trajectory planning method [J]. Drones, 2022, 6(12): 378. DOI: 10.3390/drones6120378
[13] ABEDIN S F, MUNIR M S, TRAN N H, et al. Data freshness and energy-efficient UAV navigation optimization: a deep reinforcement learning approach [J]. IEEE transactions on intelligent transportation systems, 2021, 22(9): 5994–6006. DOI: 10.1109/TITS.2020.3039617
[14] ZHANG T K, LEI J Y, LIU Y W, et al. Trajectory optimization for UAV emergency communication with limited user equipment energy: a safe-DQN approach [J]. IEEE transactions on green communications and networking, 2021, 5(3): 1236–1247. DOI: 10.1109/TGCN.2021.3068333
[15] LI K, NI W, TOVAR E, et al. Joint flight cruise control and data collection in UAV-aided Internet of Things: an onboard deep reinforcement learning approach [J]. IEEE Internet of Things journal, 2021, 8(12): 9787–9799. DOI: 10.1109/JIOT.2020.3019186