Comprehensibility of Editor-Integrated LLM-Generated Unit Tests
Mid Sweden University, Faculty of Science, Technology and Media, Department of Communication, Quality Management, and Information Systems (2023-).
2024 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

Testing is a critical part of software development, used to ensure that the created software system meets expected functional requirements and to identify potential errors. The testing process is also one of the most time-consuming and expensive tasks during development. A central part of contemporary software testing is unit testing, where test cases are created to verify the validity of individual components of a software system. While tools that automatically generate tests already exist, Large Language Models (LLMs) present new possibilities. Even when tests are generated automatically, it is important that they are maintainable, as tests often require modification to align with evolving software; comprehensibility is a main component of maintainability. In this research, we conducted a comparative analysis of the comprehensibility of tests generated by different LLM-based tools and compared them to manually written tests. Our research indicates that the general comprehensibility of LLM-generated test cases can surpass that of manually written test cases. Specifically, we found that LLM-generated test suites tend to be less complex and to contain fewer code smells, while being comparable in general readability. We also found that the underlying functionality of LLM-based test generators, including design choices and model selection, significantly impacts how well they create comprehensible tests, and these same design and model choices may further affect the line and branch coverage of the resulting test suites. Lastly, we found that LLM-generated tests tend to have worse line and branch coverage than manually written tests.
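To make the object of study concrete, below is a minimal, hypothetical sketch of the kind of unit test the thesis compares. It is not taken from the thesis; the class PriceCalculator and all names are illustrative. It shows comprehensibility attributes of the sort discussed above (a descriptive test name, a clear arrange/act/assert structure, and a single focused assertion), written with JUnit 5.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under test, for illustration only.
class PriceCalculator {
    double applyDiscount(double price, double discountPercent) {
        return price - price * discountPercent / 100.0;
    }
}

class PriceCalculatorTest {
    // A comprehensible test: the name states the scenario and the
    // expected outcome, and the body separates arrange, act, assert.
    @Test
    void applyDiscount_tenPercentOffOneHundred_returnsNinety() {
        PriceCalculator calculator = new PriceCalculator();

        double discounted = calculator.applyDiscount(100.0, 10.0);

        assertEquals(90.0, discounted, 0.001);
    }
}

By contrast, a test lacking these qualities, for example one with a generic name and many unrelated assertions, would exhibit the kind of "smelly" test code the abstract refers to.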

Place, publisher, year, edition, pages
2024, p. 24
Keywords [en]
Test code, Readability, Comprehensibility, Maintainability, Large Language Models
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:miun:diva-51899
OAI: oai:DiVA.org:miun-51899
DiVA, id: diva2:1882328
Subject / course
Computer Engineering DT1
Educational program
Software Engineering TPVAG 120/180 higher education credits
Available from: 2024-07-05 Created: 2024-07-05 Last updated: 2024-07-05 Bibliographically approved

Open Access in DiVA

fulltext (667 kB), 304 downloads
File information
File name: FULLTEXT01.pdf
File size: 667 kB
Checksum (SHA-512): 6677ee43bf449f5605d94c63eba9ea66968b4ebe2f02dff4be9c5886fd750968ae207cfcfc936fe00e05a821c3f4aaf8db2f4f53937580bafdbd00f0afd6f35a
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
Björk, Fredrik; Lindh, Joel
By organisation
Department of Communication, Quality Management, and Information Systems (2023-)
Software Engineering

Total: 304 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 666 hits