Responsible Innovation in Collaborative, Connected, and Intelligent Systems: Design, Implementation, and Governance
Recent Submissions
Item: A Two-Phased AI-Enabled Framework for Innovation with User-Generated Data from Consumer Review Sites (2025-01-07)
Mirkovski, Kristijan; Kang, Jingda; Liu, Libo; Indulska, Marta; Liu, Hao

User-generated data from consumer review sites holds immense potential for driving product innovation, yet actionable research in this area remains limited. This paper addresses this gap by proposing a two-phased AI-enabled framework for leveraging consumer reviews throughout the idea selection and generation processes. Our framework utilizes advanced AI approaches to automate idea selection and generation in open innovation settings. The proposed framework aims to extract product innovation ideas from raw online reviews, employing cutting-edge machine learning for natural language processing and generative pre-trained transformers for natural language generation. This paper offers a novel AI-enabled approach for organizations to drive open innovation and improve product development processes.

Item: Introduction to the Minitrack on Responsible Innovation in Collaborative, Connected, and Intelligent Systems: Design, Implementation, and Governance (2025-01-07)
Xiao, Bo; Abhari, Kaveh; Tan, Chee-Wee

Item: Algorithmic Accountability as a Virtue or a Mechanism? The Ethical Divide Among AI Developers (2025-01-07)
Schmidt, Jan-Hendrik; Bartsch, Sebastian; Zweidinger, Yannik; Benlian, Alexander

Algorithmic accountability is gaining prominence, driven by the ethical challenges of increasingly advanced information systems (IS) based on artificial intelligence (AI). Legal and practical initiatives often lack a clear definition of accountability, leaving AI developers to form their own understanding. In our qualitative study, we interviewed 17 AI developers to explore how their ethical orientations affect their understanding of algorithmic accountability and its professional and personal effects.
Our findings indicate that consequentialist-oriented AI developers typically understand accountability as a mechanism for quality assurance in AI development, leading to operational impacts. Conversely, deontologically oriented AI developers tend to understand algorithmic accountability as a virtue that they and the AI systems they develop must live up to, often with significant ethical implications. Our study contributes to IS research by clarifying how ethical orientations shape understandings of algorithmic accountability as either a mechanism or a virtue, which is crucial for effective management and communication in AI development projects.