Ripples of Change – An AI Job Crafting Model for Human-in-Control (2022-01-04)
Introducing a new Artificial Intelligence (AI) system disrupts workers’ sense of control. To restore it, individual workers are likely to engage in self-initiated changes to their jobs. We build on job crafting theory and extend it to propose a theoretical model explaining the ripple effect of changes from tasks to skills, relationships, and finally job cognition. We introduce the concept of human-in-control (one’s perception of their ability to deliver desired work outcomes in a work context involving AI) as the goal of the job crafting process. Our work provides a novel and important perspective on job transformation with AI. As such, it opens numerous avenues for research in this nascent stream.
Requirements for an IT Support System based on Hybrid Intelligence (2022-01-04)
In our digital world, all companies need IT support. IT support staff are under increasing pressure to solve increasingly heterogeneous user tickets. Hybrid intelligence, which combines machine power with the individual strengths of humans, could address many of these issues. As part of a larger design science research project, this paper derives requirements for an IT support system based on hybrid intelligence (ISSHI): in total, 17 requirements derived from the literature and 21 requirements derived from interviews with IT support managers and support agents at three different companies were consolidated into 24 requirements that inform an ISSHI system architecture. This architecture serves as a foundation for future research on hybrid intelligence in IT support.
Algorithmically Controlled Automated Decision-Making and Societal Acceptability: Does Algorithm Type Matter? (2022-01-04)
As technological capabilities expand, an increasing number of decision-making processes (e.g., rankings, selections, exclusions) are being delegated to computerized systems. In this paper, we examine the societal acceptability of a consequential decision-making system (university admission) to those subject to the decision (i.e., applicants). We analyze two key drivers: the nature of the decision-making agent (a human vs. an algorithm) and the decision-making logic used by the agents (predetermined vs. emerging). Consistent with uniqueness neglect theory, we propose that applicants will be more positive toward the use of human agents compared to computerized systems. Consistent with the theory of procedural justice, we further argue that applicants will find the use of a predetermined logic more acceptable than an emerging logic. We present the details and results of a factorial survey designed to test our theoretical model.