Measurement and Assessment


Recent Submissions

  • Item
    Defining the Competence of Abstract Thinking and Evaluating CS-Students' Level of Abstraction
    (2019-01-08) Zehetmeier, Daniela; Böttcher, Axel; Brüggemann-Klein, Anne; Thurner, Veronika
    Although it is commonly agreed that abstraction and abstract thinking are among the most important competences in Computer Science, only a few sources define this competence and its processes precisely. Furthermore, there is a lack of instruments to test the competence of abstract thinking and to integrate it into teaching. This work begins to close that gap by deriving a theoretical description of the competence construct of abstract thinking from a Computer Science perspective. We also present a coding manual based on this model that can be used to evaluate student assignments. The coding manual is applied to examples from our teaching practice to demonstrate its validity.
  • Item
    Introducing Low-Stakes Just-in-Time Assessments to a Flipped Software Engineering Course
    (2019-01-08) Erdogmus, Hakan; Gadgil, Soniya; Peraire, Cecile
    Objective: We present a Teaching-as-Research project that implements a new intervention in a flipped software engineering course over two semesters. The short-term objective of the intervention was to improve students’ preparedness for live sessions. The long-term objective was to improve their knowledge retention, evaluated in time-separated high-stakes assessments. Intervention: The intervention added weekly low-stakes just-in-time assessments to course modules to motivate students to review assigned instructional materials in a timely manner. The assessments consisted of, per course module, two preparatory quizzes embedded within off-class instructional materials and a non-embedded in-class quiz. Method: Embedded assessments were deployed to two subgroups of students in an alternating manner, while in-class assessments were deployed to all students. The impact of embedded assessments on in-class assessment and final exam performance was measured. Results: Embedded assessments improved students’ preparedness for live sessions; the effect was statistically significant but variable. Embedded assessments did not affect long-term knowledge retention as assessed on the final exam. We have decided to keep the intervention and deploy it to all students in the future.
  • Item
    Easing the Burden of Program Assessment: Web-based Tool Facilitates Measuring Student Outcomes for ABET Accreditation
    (2019-01-08) Schahczenski, Celia; Van Dyne, Michele
    The rapid pace of technological and social change necessitates a process of continuous improvement for academic programs. ABET accredits educational programs, ensuring that they meet criteria such as continuous program improvement. Continuously collecting data, analyzing that data to determine what is and is not working, and updating programs accordingly consume considerable faculty and administrative time. Software tools can help. This paper describes a tool developed and used by our department. This software tool: (1) has reduced the burden of measuring student outcomes for members of our department for six years, and will continue to do so; (2) received praise from members of two ABET accreditation teams, who suggested marketing the software to help other programs seeking or maintaining ABET accreditation; and (3) is undergoing enhancements for other departments in our school. The software was developed by students over multiple offerings of six courses in our curricula.
  • Item
    The Significance of Positive Verification in Unit Test Assessment
    (2019-01-08) Buffardi, Kevin; Valdivia, Pedro
    This study investigates whether computer science students' unit tests can positively verify acceptable implementations. The first phase uses between-subjects comparisons to reveal students' tendencies to write tests that yield inaccurate outcomes, either by failing acceptable solutions or by passing implementations that contain bugs. The second phase uses a novel all-function-pairs technique to compare a student's test performance independently across multiple functions. The study reveals that students struggle with positive verification and that this struggle is associated with producing implementations with more bugs. Additionally, students with poor positive verification produce a similar number of bugs to those with poor bug identification. (A minimal sketch of positive verification appears after this list.)
  • Item
    Introduction to the Minitrack on Measurement and Assessment
    (2019-01-08) Tenbergen, Bastian; Ries, Benoit
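
To make the positive-verification concept from the Buffardi and Valdivia abstract concrete: a student's test suite positively verifies when it passes a known-acceptable implementation, and identifies bugs when it fails a known-buggy one. The sketch below is a minimal illustration in Python; the absolute-value task and all function names are hypothetical and not taken from the paper, which does not publish its harness.

```python
import unittest


def acceptable_absolute(x):
    """Known-acceptable reference implementation of absolute value."""
    return x if x >= 0 else -x


def buggy_absolute(x):
    """Known-buggy implementation: wrong for negative inputs."""
    return x


def make_student_tests(impl):
    """Bind a (hypothetical) student's tests to an implementation under test."""

    class StudentTests(unittest.TestCase):
        def test_positive_input(self):
            self.assertEqual(impl(5), 5)

        def test_negative_input(self):
            self.assertEqual(impl(-3), 3)

    return StudentTests


def tests_pass_against(impl):
    """Run the student's tests against impl; True if every test passes."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(make_student_tests(impl))
    result = unittest.TestResult()
    suite.run(result)
    return result.wasSuccessful()


# Positive verification: the tests should PASS the acceptable implementation.
assert tests_pass_against(acceptable_absolute)
# Bug identification: the tests should FAIL the buggy implementation.
assert not tests_pass_against(buggy_absolute)
```

A suite that fails acceptable_absolute would have poor positive verification even if it also correctly fails buggy_absolute, which is the distinction the study measures.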