Investigating the Impact of Rewards and Sanctions on Developers’ Proactive AI Accountability Behavior
Starting Page: 790
Abstract
Accountability of artificial intelligence (AI)-based systems is often addressed reactively, mainly after harm occurs. This study shifts toward proactive approaches, highlighting AI developers’ role in risk mitigation. Proactive AI accountability behavior refers to self-initiated, future-oriented actions that go beyond formal job roles to justify developers’ actions and decisions, and to facilitate the clear attribution of accountability. Drawing on Proactive Motivation Theory, we conducted an online experiment (n = 264) to investigate how governance mechanisms (rewards vs. sanctions) and motivational states impact such behavior. Our results reveal flexible role orientation as the key driver of proactive behavior and how rewards and sanctions impact such a mindset. We contribute by conceptualizing proactive AI accountability behavior and providing a theoretical model that explains its emergence, underscoring the importance of using rewards to foster a proactive mindset alongside sanctions as guardrails against harmful initiatives.
Extent: 10 pages
Type: Conference Paper
Related To: Proceedings of the 59th Hawaii International Conference on System Sciences
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
