Please use this identifier to cite or link to this item: http://hdl.handle.net/10125/71121

‘Could you please pay attention?’ Comparing in-person and MTurk Responses on a Computer Code Review Task

File: 0408.pdf (451.63 kB, Adobe PDF)

Item Summary

Title: ‘Could you please pay attention?’ Comparing in-person and MTurk Responses on a Computer Code Review Task
Authors: Gibson, Anthony
Alarcon, Gene
Lee, Michael
Hamdan, Izz Aldin
Keywords: Crowd-based Platforms
careless responding
code review
mturk
Date Issued: 05 Jan 2021
Abstract: The current study examined differences in data quality across two environments (i.e., in a laboratory and online via Amazon’s Mechanical Turk) on a computer code review task. Researchers and practitioners often collect data online for convenience and to obtain a more generalizable sample of participants. The lack of social contact between researchers and participants, however, may lead to less effort being devoted to the experimental task and, in turn, poorer-quality data. The results of the current study showed that data quality, at least when measuring the individual difference variables, was drastically worse when the experimental task was presented online. In contrast, we observed few differences in experimental task perceptions across the two samples; rather, participants spent significantly less time examining the computer code when completing the experiment online. The current study has implications for the use of online platforms (like MTurk) to collect experimental data.
Pages/Duration: 10 pages
URI: http://hdl.handle.net/10125/71121
ISBN: 978-0-9981331-4-0
DOI: 10.24251/HICSS.2021.504
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Appears in Collections: Crowd-based Platforms


