Mid Sweden University

Computation Offloading and Resource Allocation in MEC-Enabled Integrated Aerial-Terrestrial Vehicular Networks: A Reinforcement Learning Approach
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. ORCID iD: 0000-0003-3717-7793
2022 (English). In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 23, no. 11, p. 21478-21491. Article in journal (Refereed). Published.
Abstract [en]

As important services of the future sixth-generation (6G) wireless networks, vehicular communication and mobile edge computing (MEC) have received considerable interest in recent years for their significant potential in intelligent transportation systems. However, MEC-enabled vehicular networks depend heavily on network access and communication infrastructure, which are often unavailable in remote areas, leaving computation offloading prone to failure. To address this issue, we propose an MEC-enabled vehicular network assisted by aerial-terrestrial connectivity that provides network access and high data-rate entertainment services to vehicles. We present a time-varying, dynamic system model in which high-altitude platforms (HAPs) equipped with MEC servers, connected to a backhaul system of low-earth-orbit (LEO) satellites, provide computation offloading capability to the vehicles as well as network access for vehicle-to-vehicle (V2V) communications. Our main objective is to minimize the total computation and communication overhead of the joint computation offloading and resource allocation strategies for the system of vehicles. Since the formulated optimization problem is a mixed-integer non-linear program (MINLP), which is NP-hard, we propose a decentralized value-iteration-based reinforcement learning (RL) approach as a solution. In our Q-learning-assisted analysis, each vehicle acts as an intelligent agent that forms optimal offloading and resource allocation strategies. We further extend the solution to deep Q-learning (DQL) and double deep Q-learning to overcome the curse of dimensionality and the over-estimation of value functions that arise in Q-learning. Simulation results demonstrate that our solution reduces the total system cost compared to baseline schemes.
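The decentralized Q-learning formulation described in the abstract — each vehicle acting as an independent agent that decides whether to execute a task locally or offload it to a HAP-mounted MEC server so as to minimize a combined computation/communication overhead — can be illustrated with a minimal tabular sketch. The state buckets, cost function, and all parameter values below are illustrative assumptions, not the paper's actual system model.

```python
import random


class VehicleAgent:
    """Toy tabular Q-learning agent for a binary offloading decision.

    States are (task_size, channel_quality) buckets; actions are
    0 = compute locally, 1 = offload to the HAP-mounted MEC server.
    This is a hypothetical sketch, not the paper's formulation.
    """
    ACTIONS = (0, 1)

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # (state, action) -> estimated value

    def choose(self, state, rng):
        # Epsilon-greedy exploration over the two offloading actions.
        if rng.random() < self.epsilon:
            return rng.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning (value-iteration style) update rule.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)


def overhead(task_size, channel, action):
    """Illustrative weighted delay/energy cost; lower is better."""
    if action == 0:                      # local execution scales with task size
        return 1.0 * task_size
    return task_size / channel + 1.0     # offloading: transmit cost + fixed overhead


def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    agent = VehicleAgent()
    for _ in range(episodes):
        # Environment draws bucketed task sizes and channel gains at random.
        state = (rng.choice([1, 2, 3]), rng.choice([1, 2, 4]))
        action = agent.choose(state, rng)
        reward = -overhead(state[0], state[1], action)  # minimize total overhead
        next_state = (rng.choice([1, 2, 3]), rng.choice([1, 2, 4]))
        agent.update(state, action, reward, next_state)
    return agent
```

After training, the learned Q-table prefers offloading for large tasks over good channels and local execution for small tasks over poor channels, mirroring the trade-off the abstract describes. Extending this to deep and double deep Q-learning replaces the table with a neural function approximator.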

Place, publisher, year, edition, pages
2022. Vol. 23, no 11, p. 21478-21491
Keywords [en]
Autonomous aerial vehicles, Dynamic scheduling, integrated aerial-terrestrial networks, mobile edge computing (MEC), multi-agent reinforcement learning, Q-learning, Resource management, Sixth generation (6G), Task analysis, Time-varying systems, Vehicle dynamics, vehicular communication
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:miun:diva-45733
DOI: 10.1109/TITS.2022.3179987
ISI: 000880752900119
Scopus ID: 2-s2.0-85132757893
OAI: oai:DiVA.org:miun-45733
DiVA, id: diva2:1685136
Available from: 2022-08-01. Created: 2022-08-01. Last updated: 2022-12-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Mahmood, Aamir; Gidlund, Mikael
