Mid Sweden University

miun.se Publications
Experimental Analysis of Trustworthy In-Vehicle Intrusion Detection System using eXplainable Artificial Intelligence (XAI)
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
RISE.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
2022 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 102831-102841. Article in journal (Refereed). Published.
Abstract [en]

Anomaly-based In-Vehicle Intrusion Detection Systems (IV-IDS) are one of the protection mechanisms for detecting cyber attacks on automotive vehicles. Using artificial intelligence (AI) for anomaly detection to thwart cyber attacks is promising, but it suffers from false alarms and from decisions that are hard to interpret. This leads to uncertainty and distrust towards such an IDS design unless it can explain its behavior, e.g., by using eXplainable AI (XAI). In this paper, we consider the XAI-powered design of such an IV-IDS using CAN bus data from a public dataset named “Survival”. Novel features are engineered, and a Deep Neural Network (DNN) is trained over the dataset. A visualization-based explanation, “VisExp”, is created to explain the behavior of the AI-based IV-IDS and is evaluated by experts in a survey against a rule-based explanation. Our results show that experts’ trust in the AI-based IV-IDS increases significantly when they are provided with VisExp (more so than with the rule-based explanation). These findings confirm the effect, and by extension the need, of explainability in automated systems, and VisExp, as a source of increased explainability, shows promise in helping involved parties gain trust in such systems.
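The pipeline the abstract describes — engineering timing features from raw CAN frames so a classifier can separate normal traffic from attacks such as flooding — can be sketched minimally as below. This is an illustrative assumption, not the paper's actual method: the toy trace, the 0x130 arbitration ID, and the hard-coded threshold (standing in for the trained DNN's learned decision boundary) are all hypothetical.

```python
# Hypothetical sketch: deriving an inter-arrival-time feature from CAN
# frames, with a simple threshold standing in for the paper's trained DNN.
from statistics import mean

# (timestamp_s, arbitration_id) pairs for a toy CAN trace; real traces
# such as the "Survival" dataset also carry DLC and payload bytes.
normal = [(0.000, 0x130), (0.010, 0x130), (0.020, 0x130), (0.030, 0x130)]
flood  = [(0.000, 0x130), (0.001, 0x130), (0.002, 0x130), (0.003, 0x130)]

def inter_arrival_mean(frames, can_id):
    """Mean gap between consecutive frames of one arbitration ID."""
    ts = [t for t, i in frames if i == can_id]
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return mean(gaps)

def is_anomalous(frames, can_id, min_gap=0.005):
    # A flooding attack compresses inter-arrival times far below the
    # ECU's normal cycle time. The paper's DNN learns this boundary
    # from data; here it is hard-coded purely for illustration.
    return inter_arrival_mean(frames, can_id) < min_gap

print(is_anomalous(normal, 0x130))  # False
print(is_anomalous(flood, 0x130))   # True
```

A visualization-based explanation in the spirit of VisExp would then show which engineered features (here, the inter-arrival mean) pushed a given frame window towards the "attack" decision.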

Place, publisher, year, edition, pages
2022. Vol. 10, p. 102831-102841
Keywords [en]
Artificial intelligence, Automotive, Automotive engineering, Behavioral sciences, Deep Learning, Intrusion detection, Intrusion Detection System, Machine Learning, Random forests, Trust management, Trustworthiness, XAI
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:miun:diva-46301
DOI: 10.1109/ACCESS.2022.3208573
ISI: 000864338300001
Scopus ID: 2-s2.0-85139441364
OAI: oai:DiVA.org:miun-46301
DiVA, id: diva2:1704801
Available from: 2022-10-19. Created: 2022-10-19. Last updated: 2022-10-20. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (DOI) · Scopus

Authority records

Fakhrul Abedin, Sarder; Thar, Kyi; Mahmood, Aamir; Gidlund, Mikael
