‘Could You Please Pay Attention?’ Comparing In-person and MTurk Responses on a Computer Code Review Task

Date
2021-01-05
Authors
Gibson, Anthony
Alarcon, Gene
Lee, Michael
Hamdan, Izz Aldin
Starting Page
4148
Abstract
The current study examined differences in data quality across two environments (i.e., in a laboratory and online via Amazon’s Mechanical Turk) on a computer code review task. Researchers and practitioners often collect data online for convenience and to obtain a more generalizable sample of participants. The lack of social contact between researchers and participants, however, may lead to less effort being devoted to the experimental task and, in turn, to poor-quality data. The results of the current study showed that data quality, at least when measuring the individual difference variables, was drastically worse when the experimental task was presented online. In contrast, we observed few differences in experimental task perceptions across the two samples. Rather, participants spent significantly less time examining the computer code when completing the experiment online. The current study has implications for the effects of using online platforms (such as MTurk) to collect experimental data.
Keywords
crowd-based platforms, careless responding, code review, MTurk
Extent
10 pages
Related To
Proceedings of the 54th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International