Responsible AI Governance for Connected Intelligence
Permanent URI for this collection: https://hdl.handle.net/10125/112411
Recent Submissions
Investigating the Impact of Rewards and Sanctions on Developers’ Proactive AI Accountability Behavior (2026-01-06)
Nguyen, Long Hoang; Du, Guangyu; Lins, Sebastian; Sunyaev, Ali
Accountability of artificial intelligence (AI)-based systems is often addressed reactively, mainly after harm occurs. This study shifts toward proactive approaches, highlighting AI developers’ role in risk mitigation. Proactive AI accountability behavior refers to self-initiated, future-oriented actions that go beyond formal job roles to justify developers’ actions and decisions, and to facilitate the clear attribution of accountability. Drawing on Proactive Motivation Theory, we conducted an online experiment (n = 264) to investigate how governance mechanisms (rewards vs. sanctions) and motivational states impact such behavior. Our results reveal flexible role orientation as the key driver of proactive behavior and how rewards and sanctions impact such a mindset. We contribute by conceptualizing proactive AI accountability behavior and providing a theoretical model that explains its emergence, underscoring the importance of using rewards to foster a proactive mindset alongside sanctions as guardrails against harmful initiatives.

Polycentric Generative-Assurance Theory: Toward Adaptive Governance in Generative AI-Augmented Software Assurance (2026-01-06)
Safaei Pour, Morteza; Abhari, Kaveh; Fathi, Farzad
The integration of generative AI (GenAI) into software development is transforming how code is authored, reviewed, and assured. While GenAI boosts productivity and creativity, it disrupts longstanding assurance frameworks, introducing epistemic opacity, validation deficits, accountability ambiguities, and governance challenges. This paper introduces Polycentric Generative-Assurance Theory (PGAT), a sociotechnical framework explaining how trust in AI-generated code is sustained through five interdependent responsibilities: epistemic mapping, adversarial socio-technical analysis, meta-validation, computational ethics, and evolutionary governance. Our findings reveal that assurance is no longer linear or role-bound, but rather a distributed, adaptive, and emergent practice. PGAT reframes assurance as a responsible process of trust orchestration, where multiple responsibilities coalesce to ensure the reliability, maintainability, and ethical integrity of software development practices.

Introduction to the Minitrack on Responsible AI Governance for Connected Intelligence (2026-01-06)
Xiao, Bo; Abhari, Kaveh; Winter, Jenifer; Tan, Chee-Wee
