Mid Sweden University

miun.se Publications
Multi-Agent Deep Reinforcement Learning For Real-World Traffic Signal Controls - A Case Study
Institute Industrial IT (inIT). ORCID iD: 0000-0003-0777-4319
Stanford University.
Institute Industrial IT (inIT).
Stanford University.
2022 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Increasing traffic congestion leads to significant costs, with poorly configured signalized intersections being a common bottleneck and root cause. Traditional traffic signal control (TSC) systems employ rule-based or heuristic methods to decide signal timings, while adaptive TSC solutions use traffic-actuated control logic to respond to real-time traffic changes. However, such systems are expensive to deploy and are often not flexible enough to adequately adapt to the volatility of today's traffic dynamics. More recently, this problem has become a frontier topic in deep reinforcement learning (DRL), enabling multi-agent DRL approaches that can operate in environments with several agents, such as traffic systems with multiple signalized intersections. However, many of these approaches have only been validated on artificial traffic grids. This paper presents a case study in which real-world traffic data from the town of Lemgo, Germany, is used to create a realistic road model in VISSIM. A multi-agent DRL setup, comprising multiple independent deep Q-networks, is applied to the simulated traffic network. Traditional rule-based signal controls, modeled in LISA+ and currently deployed at the studied intersections, are integrated into the traffic model and serve as a performance baseline. The performance evaluation indicates a significant reduction of traffic congestion when using the RL-based signal control policy instead of the conventional TSC approach with LISA+. Consequently, this paper reinforces the applicability of RL concepts in TSC engineering by employing a highly realistic traffic model.
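
The abstract describes a multi-agent setup of multiple independent deep Q-networks, one per signalized intersection. The following Python (PyTorch) sketch illustrates what such independent DQN agents could look like; it is not the authors' implementation. Observation size, number of phases, reward signal, and the simulator interface (e.g., a VISSIM co-simulation hook) are assumptions and are stubbed or omitted here.

# Minimal sketch of independent DQN agents for multi-intersection signal control.
# Hypothetical state/action sizes; not the paper's implementation. A simulator
# interface (e.g., a VISSIM co-simulation wrapper) would supply observations,
# rewards, and phase switching, and is not shown here.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small MLP mapping an intersection's local observation to phase Q-values."""
    def __init__(self, obs_dim: int, n_phases: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_phases),
        )

    def forward(self, obs):
        return self.net(obs)

class IndependentDQNAgent:
    """One agent per signalized intersection; learns only from its own transitions."""
    def __init__(self, obs_dim, n_phases, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(obs_dim, n_phases)
        self.target_q = QNetwork(obs_dim, n_phases)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=50_000)
        self.gamma, self.eps, self.n_phases = gamma, eps, n_phases

    def act(self, obs: np.ndarray) -> int:
        # Epsilon-greedy selection of the next signal phase.
        if random.random() < self.eps:
            return random.randrange(self.n_phases)
        with torch.no_grad():
            q = self.q(torch.as_tensor(obs, dtype=torch.float32))
        return int(q.argmax())

    def store(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def learn(self, batch_size=64):
        # One gradient step on a random minibatch from this agent's own replay buffer.
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        obs, act, rew, nxt, done = map(np.array, zip(*batch))
        obs = torch.as_tensor(obs, dtype=torch.float32)
        nxt = torch.as_tensor(nxt, dtype=torch.float32)
        rew = torch.as_tensor(rew, dtype=torch.float32)
        done = torch.as_tensor(done, dtype=torch.float32)
        act = torch.as_tensor(act, dtype=torch.int64)

        q_sa = self.q(obs).gather(1, act.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rew + self.gamma * (1 - done) * self.target_q(nxt).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target_q.load_state_dict(self.q.state_dict())

# One independent agent per intersection; the reward could be, for example,
# negative queue length or waiting time reported by the simulator.
agents = {name: IndependentDQNAgent(obs_dim=12, n_phases=4)
          for name in ["intersection_A", "intersection_B"]}

In this independent-learner scheme, each agent trains only on its own local transitions; any coordination between intersections emerges, if at all, through the shared traffic environment rather than through explicit communication.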

Place, publisher, year, edition, pages
Perth, Australia: Institute of Electrical and Electronics Engineers (IEEE), 2022.
Keywords [en]
traffic signal control, deep reinforcement learning, vissim
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-46955
DOI: 10.1109/INDIN51773.2022.9976109
ISBN: 978-1-7281-7568-3 (print)
OAI: oai:DiVA.org:miun-46955
DiVA, id: diva2:1729039
Conference
20th IEEE International Conference on Industrial Informatics (INDIN), Perth, Australia, 2022
Available from: 2023-01-19. Created: 2023-01-19. Last updated: 2023-01-31. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
Friesen, Maxim
In the same subject area
Electrical Engineering, Electronic Engineering, Information Engineering
