The Cognitive Effects of Machine Learning Aid in Domain-Specific and Domain-General Tasks

Divis, Kristin
Howell, Breannan
Matzen, Laura
Stites, Mallory
Gastelum, Zoe
With machine learning (ML) technologies rapidly expanding into new applications and domains, users increasingly collaborate with artificial intelligence-assisted diagnostic tools. But what impact does ML aid have on cognitive performance, especially when the ML output is not always accurate? Here, we examined the cognitive effects of simulated ML assistance—including both accurate and inaccurate output—on two tasks: a domain-specific nuclear safeguards task and a domain-general visual search task. Patterns of performance varied across the two tasks both with the presence of ML aid and with the category of ML feedback (e.g., false alarm). These results indicate that factors such as domain can influence users' performance with ML aid, and they suggest the need to test the effects of ML output (and its associated errors) in the specific context of use, especially when the stimuli of interest are vague or ill-defined.
Applications of Human-AI Collaboration: Insights from Theory and Practice, decision making, human-ai-collaboration, machine learning errors, visual search