Mid Sweden University

Automated Boundary Identification for Machine Learning Classifiers
Dobslaw, Felix (Mid Sweden University, Faculty of Science, Technology and Media, Department of Communication, Quality Management, and Information Systems (2023-); Software Engineering and Education). ORCID iD: 0000-0001-9372-3416
Feldt, Robert (Chalmers University of Technology). ORCID iD: 0000-0002-5179-4205
2024 (English). In: SBFT '24: Proceedings of the 17th ACM/IEEE International Workshop on Search-Based and Fuzz Testing, Association for Computing Machinery (ACM), 2024, p. 1-8. Conference paper, Published paper (Refereed)
Abstract [en]

AI and Machine Learning (ML) models are increasingly used as (critical) components in software systems, even safety-critical ones. This puts new demands on how thoroughly we need to test them and requires new and expanded testing methods. Recent boundary-value identification methods have been developed and shown to automatically find boundary candidates for traditional, non-ML software: pairs of nearby inputs that result in (highly) differing outputs. These can be shown to developers and testers, who can judge whether the boundary is where it is supposed to be. Here, we explore how this method can identify decision boundaries of ML classification models. The resulting ML Boundary Spanning Algorithm (ML-BSA) is a search-based method that extends previous work in two main ways. We empirically evaluate ML-BSA on seven ML datasets and show that it better spans, and thus better identifies, the entire classification boundary (or boundaries). The diversity objective helps spread the boundary pairs more broadly and evenly. This, we argue, can help testers and developers better judge where a classification boundary actually lies, compare it to expectations, and then focus further testing, validation, and even further training and model refinement on the parts of the boundary where behaviour is not ideal.
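
The abstract describes searching for "boundary pairs": nearby inputs that fall on opposite sides of a classifier's decision boundary. The following Python snippet is a minimal, hypothetical sketch of that idea using a random-perturbation search on a toy scikit-learn model; it is not the paper's ML-BSA algorithm, and the dataset, model, perturbation size, and function names are illustrative assumptions only.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Train a classifier whose decision boundary we want to probe.
    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X, y)

    def find_boundary_pairs(model, inputs, step=0.05, n_candidates=200):
        """Return pairs of nearby inputs whose predicted classes differ."""
        pairs = []
        idx = rng.choice(len(inputs), size=n_candidates, replace=False)
        for x in inputs[idx]:
            # Perturb the input slightly and check whether the prediction flips.
            x_near = x + rng.normal(scale=step, size=x.shape)
            if model.predict([x])[0] != model.predict([x_near])[0]:
                pairs.append((x, x_near))  # candidate boundary pair
        return pairs

    pairs = find_boundary_pairs(clf, X)
    print(f"Found {len(pairs)} candidate boundary pairs")

The paper's method additionally uses a diversity objective to spread such pairs across the whole boundary rather than clustering them; this sketch only illustrates the basic pair-finding notion.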

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024. p. 1-8
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:miun:diva-52232
DOI: 10.1145/3643659.3643927
ISI: 001324631400001
Scopus ID: 2-s2.0-85205777142
ISBN: 979-8-4007-0562-5 (electronic)
OAI: oai:DiVA.org:miun-52232
DiVA id: diva2:1892472
Conference
SBFT '24: 17th ACM/IEEE International Workshop on Search-Based and Fuzz Testing, Lisbon, Portugal, 14 April 2024
Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2024-11-04. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Dobslaw, Felix

Search in DiVA

By author/editor
Dobslaw, Felix; Feldt, Robert
By organisation
Department of Communication, Quality Management, and Information Systems (2023-)
Software Engineering
