Understanding the Adoption Intention of AI through the Ethics Lens (2020-01-07). Understanding users and user behaviors in accepting new technologies such as AI has become ever more important. Meanwhile, information systems with AI inevitably engender ethical issues such as transparency and accountability related to the consequences of recognition, decisions, and recommendations. Our work adds moral psychology variables to the Theory of Reasoned Action (TRA) in order to better explicate the adoption aspects of AI. For the research, we employed social desirability and self-consistency from moral psychology as underlying attitudes. In addition, moral norm is added to TRA to moderate the effect of the attitudes on the outcome variable. The empirical results indicate both a direct and an indirect role of the morality-related variables in explaining users’ AI adoption intentions. We found that moral psychology plays an important role in explaining user attitudes toward AI and subsequent intentions to adopt an AI system.
Toward an Understanding of Responsible Artificial Intelligence Practices (2020-01-07). Artificial Intelligence (AI) is influencing all aspects of human and business activities today. Although the potential benefits of AI technologies have been widely discussed in the current literature, there is an urgent need to understand how AI can be designed to operate responsibly and to act in a manner that meets stakeholders’ expectations and applicable regulations. We seek to fill this gap by exploring the practices of responsible AI and identifying the potential benefits of implementing them. In this study, 10 responsible AI cases were selected from different industries to better understand the use of responsible AI in practice. Four responsible AI practices are identified: governance, ethically designed solutions, risk control, and training and education. Five strategies are also recommended for firms considering the adoption of responsible AI practices.