Mid Sweden University
Experiences with Remote Examination Formats in Light of GPT-4
Mid Sweden University, Faculty of Science, Technology and Media, Department of Communication, Quality Management, and Information Systems (2023-). ORCID iD: 0000-0001-9372-3416
Mid Sweden University, Faculty of Science, Technology and Media, Department of Communication, Quality Management, and Information Systems (2023-).
2023 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2023, p. 220-225. Conference paper, Published paper (Refereed)
Abstract [en]

Sudden access to the rapidly improving large language model GPT by OpenAI forces educational institutions worldwide to revisit their exam procedures. In the pre-GPT era, we successfully applied oral and open-book home exams in two courses in the third year of our predominantly remote Software Engineering BSc program. In this paper, we ask whether our current open-book exams are still viable or whether a move back to a legally compliant but less scalable oral exam is the only workable alternative. We further compare work-effort estimates between oral and open-book exams and report on differences in throughput and grade distribution over eight years to better understand the impact of examination format on the outcome. Examining GPT-4 on the most recent open-book exams showed that our current Artificial Intelligence and Reactive Programming exams are not GPT-4 proof. Three potential weaknesses of GPT are outlined. We also found that grade distributions have largely been unaffected by the examination format, leaving open a move to oral examinations if needed. Throughput was higher for open-book exam course instances (73% vs. 64%), but so were fail rates (12% vs. 7%), with teacher workload increasing even for smaller classes. We also report on our experience regarding effort: oral examinations are efficient for smaller groups but come with caveats regarding intensity and stress.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023. p. 220-225
Keywords [en]
ChatGPT, Examination Formats, Oral Examinations, Software Engineering Education
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:miun:diva-49025
DOI: 10.1145/3593663.3593695
ISI: 001124146500029
Scopus ID: 2-s2.0-85163528803
ISBN: 9781450399562 (print)
OAI: oai:DiVA.org:miun-49025
DiVA, id: diva2:1787665
Conference
ACM International Conference Proceeding Series
Available from: 2023-08-14. Created: 2023-08-14. Last updated: 2024-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Dobslaw, Felix; Bergh, Peter

Search in DiVA

By author/editor
Dobslaw, Felix; Bergh, Peter
By organisation
Department of Communication, Quality Management, and Information Systems (2023-)
Computer and Information Sciences