Mid Sweden University

miun.se Publications
Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. Kyung Hee University, Yongin 17104, South Korea.
2021 (English). In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 22, no. 9, p. 5994-6006. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we design a navigation policy for multiple unmanned aerial vehicles (UAVs) deployed as mobile base stations (BSs) to improve data freshness and connectivity for Internet of Things (IoT) devices. First, we formulate an energy-efficient trajectory optimization problem whose objective is to maximize energy efficiency by optimizing the UAV-BS trajectory policy. We also incorporate contextual information such as energy and age of information (AoI) constraints to ensure data freshness at the ground BS. Second, we propose an agile deep reinforcement learning model with experience replay to solve the formulated problem under the contextual constraints on UAV-BS navigation. The proposed approach is well suited to this problem, since the state space is extremely large and finding the best trajectory policy with useful contextual features is too complex for the UAV-BSs. By applying the trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time. Finally, the simulation results show that the proposed approach is 3.6% and 3.13% more energy efficient than the greedy and baseline deep Q-Network (DQN) approaches, respectively.
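To make the abstract's idea concrete, the following is a minimal sketch (not the authors' implementation) of a deep Q-Network with experience replay applied to a toy UAV-BS trajectory problem. The environment, reward shaping (energy cost plus an AoI penalty), network sizes, and hyperparameters are all illustrative assumptions; the paper's actual formulation, constraints, and training setup are not reproduced here.

```python
# Minimal illustrative sketch: DQN with experience replay for a toy UAV-BS
# grid world. State = UAV position plus the age of information (AoI) of one
# IoT device; reward trades off energy cost against data freshness.
# All environment and hyperparameter choices are assumptions for illustration.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small MLP mapping a state vector to Q-values for 5 moves (stay/N/S/E/W)."""
    def __init__(self, state_dim=3, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class ToyUavEnv:
    """Hypothetical environment: the UAV-BS moves on a 10x10 grid; visiting the
    IoT cell resets its AoI to zero, every step costs energy and ages the data."""
    def __init__(self, grid=10, iot_cell=(7, 7)):
        self.grid, self.iot_cell = grid, iot_cell
        self.reset()

    def reset(self):
        self.pos, self.aoi = [0, 0], 0.0
        return self._state()

    def _state(self):
        return np.array([self.pos[0] / self.grid, self.pos[1] / self.grid,
                         min(self.aoi, 50.0) / 50.0], dtype=np.float32)

    def step(self, action):
        dx, dy = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)][action]
        self.pos[0] = int(np.clip(self.pos[0] + dx, 0, self.grid - 1))
        self.pos[1] = int(np.clip(self.pos[1] + dy, 0, self.grid - 1))
        self.aoi += 1.0
        energy_cost = 0.1 if action == 0 else 0.3        # hovering vs. moving
        if tuple(self.pos) == self.iot_cell:
            self.aoi = 0.0                               # fresh data collected
        reward = -energy_cost - 0.05 * self.aoi          # energy + AoI penalty
        return self._state(), reward, False


def train(episodes=200, gamma=0.95, eps=0.1, batch=64):
    env, qnet = ToyUavEnv(), QNet()
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)                        # experience replay buffer
    for _ in range(episodes):
        s = env.reset()
        for _ in range(100):                             # fixed-length episode
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(5)
            else:
                a = int(qnet(torch.from_numpy(s)).argmax())
            s2, r, _ = env.step(a)
            replay.append((s, a, r, s2))
            s = s2
            if len(replay) >= batch:                     # sample a minibatch, one TD update
                sb, ab, rb, s2b = map(np.array, zip(*random.sample(replay, batch)))
                sb_t, s2b_t = torch.from_numpy(sb), torch.from_numpy(s2b)
                q = qnet(sb_t)[torch.arange(batch), torch.from_numpy(ab)]
                with torch.no_grad():
                    target = torch.from_numpy(rb).float() + gamma * qnet(s2b_t).max(1).values
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad(); loss.backward(); opt.step()
    return qnet


if __name__ == "__main__":
    train(episodes=20)   # short demo run
```

The replay buffer is what the abstract refers to as "experience replay": transitions are stored and sampled uniformly, which decorrelates consecutive observations and lets each flight experience be reused across many gradient updates.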

Place, publisher, year, edition, pages
2021. Vol. 22, no 9, p. 5994-6006
Keywords [en]
age of information, deep reinforcement learning, trajectory optimization, unmanned aerial vehicle
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-40831
DOI: 10.1109/TITS.2020.3039617
ISI: 000692209100049
Scopus ID: 2-s2.0-85097958942
OAI: oai:DiVA.org:miun-40831
DiVA, id: diva2:1513018
Available from: 2020-12-29. Created: 2020-12-29. Last updated: 2021-09-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Abedin, Sarder
