Semi-Automatic Assessment of Modeling Exercises using Supervised Machine Learning

dc.contributor.author: Krusche, Stephan
dc.date.accessioned: 2021-12-24T17:24:04Z
dc.date.available: 2021-12-24T17:24:04Z
dc.date.issued: 2022-01-04
dc.description.abstract: Motivation: Modeling is an essential skill in software engineering. With rising student numbers, introductory courses with hundreds of students are becoming standard. Grading all students’ exercise solutions and providing individual feedback is time-consuming. Objectives: This paper describes a semi-automatic assessment approach based on supervised machine learning. It aims to increase the fairness and efficiency of grading and to improve the quality of the provided feedback. Method: While the first submitted models are assessed manually, the system learns which elements are correct or wrong and which feedback is appropriate. In subsequent assessments, the system identifies similar model elements and suggests how to assess them based on the scores and feedback of previous assessments. While reviewing new submissions, reviewers apply or adjust the suggestions and manually assess the remaining model elements. Results: We empirically evaluated this approach in three modeling exercises in a large software engineering course, each with more than 800 participants, and compared the results with three manually assessed exercises. A quantitative analysis reveals an automatic feedback rate between 65 % and 80 %. Between 4.6 % and 9.6 % of the suggestions had to be adjusted manually. Discussion: Qualitative feedback indicates that semi-automatic assessment reduces the effort and improves consistency. A few participants noted that the proposed feedback sometimes does not fit the context of the submission and that the selection of feedback should be further improved.
dc.format.extent: 10 pages
dc.identifier.doi: 10.24251/HICSS.2022.108
dc.identifier.isbn: 978-0-9981331-5-7
dc.identifier.uri: http://hdl.handle.net/10125/79439
dc.language.iso: eng
dc.relation.ispartof: Proceedings of the 55th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Assessment, Evaluation and Measurements (AEM)
dc.subject: education
dc.subject: learning management system
dc.subject: learning success
dc.subject: online editor
dc.subject: software engineering
dc.title: Semi-Automatic Assessment of Modeling Exercises using Supervised Machine Learning
dc.type.dcmi: text

Files

Original bundle
Name: 0086.pdf
Size: 955.01 KB
Format: Adobe Portable Document Format