AI and the Future of Work
Recent Submissions
Artificial Socialization? How Artificial Intelligence Applications Can Shape A New Era of Employee Onboarding Practices (2023-01-03)
Ritz, Eva; Donisi, Fabio; Elshan, Edona; Rietsche, Roman
Onboarding has always emphasized personal contact with new employees. Excellent onboarding can, for instance, extend an employee's tenure and improve loyalty. Even in a physical setting, the onboarding process is demanding for both the newcomer and the onboarding organization. However, COVID-19 has made this process even more challenging by forcing a rapid shift from offline to online organizational onboarding practices. Organizations are adopting new technologies, such as artificial intelligence (AI), to support work processes during the pandemic, which could shape a new era of work practices. However, how AI applications can or should support onboarding has not yet been studied. Our research therefore conducts a literature review of current onboarding practices and uses expert interviews to evaluate AI's potential and pitfalls for each practice. We contribute to the literature by presenting a holistic state-of-the-art picture of onboarding practices and evaluating potential application areas of AI in the onboarding process.

AI Literacy - Towards Measuring Human Competency in Artificial Intelligence (2023-01-03)
Pinski, Marc; Benlian, Alexander
Artificial intelligence (AI) has gained significant traction in information systems (IS) research in recent years. While past studies have identified many effects of AI technology on human-AI collaboration, there is a paucity of IS literature on the human competencies that affect this relationship. In this study, we set out to develop a measurement instrument (scale) for general AI literacy, that is, humans' socio-technical competencies regarding AI. We conducted a systematic literature review followed by five expert interviews to define and conceptualize the construct of general AI literacy and to generate an initial set of items. Furthermore, we performed two rounds of card sorting with six and five judges, respectively, and a pre-test study with 50 participants to evaluate the developed scale. The validated measurement instrument contains five dimensions and 13 items. We provide empirical support for the measurement model and conclude with future research directions.

"I Felt Like I Wasn't Really Meant to be There": Understanding Women's Perceptions of Gender in Approaching AI Design & Development (2023-01-03)
Schulenberg, Kelsea; Hauptman, Allyson I.; Schlesener, Elizabeth A.; Watkins, Heather; Freeman, Guo
Women continue to enter and remain in AI development at a rate far lower than men, and this glaring gender gap has caused AI technologies to contain inherent bias in their design. While studies have explored the challenges women face in the field, little has been done to explore how women's gender identities influence the way they approach gender in AI design. In this study, we conducted semi-structured interviews with eight women with diverse experiences in various areas of AI design to understand how they perceive the role of their gender identity within the AI design community and how those perceptions have influenced their design approach for AI systems. Our research provides first-hand empirical evidence, from women's own perspectives, on how the enduring gender gap in the AI field reinforces harmful bias in the design and development of AI systems. We also propose initial design implications and highlight urgently needed future research for designing more inclusive AI technologies with diverse gender perspectives in mind.

Conversational Agent as a Black Hat: Can Criticising Improve Idea Generation? (2023-01-03)
Cvetkovic, Izabel; Rosenberg, Valeria; Bittner, Eva
Many ideas originate in the Ideate phase of Design Thinking. In this context, criticism is often considered a creativity killer, yet recent studies show that it can be beneficial. One example is the black hat of the Six Thinking Hats creativity method, which points out the weaknesses of an idea so that they can be eliminated through further refinement. Previous research shows that conversational agents have an advantage over humans when criticizing because of their perceived neutrality. To investigate this, we developed and implemented a conversational agent and evaluated it using an A/B test. The results of the study show that the prototype is perceived as less neutral when it criticizes, and that criticism by the conversational agent can lead to higher-quality ideas. This work contributes to a better understanding of conversational agents in the black hat role as well as of their neutrality.

Fairness in Algorithmic Management: How Practices Promote Fairness and Redress Unfairness on Digital Labor Platforms (2023-01-03)
Schulze, Laura; Trenz, Manuel; Cai, Zhao; Tan, Chee-Wee
Algorithmic management (AM) is employed on digital labor platforms (DLPs) to efficiently manage interactions between workers and clients. However, AM comes with ethical challenges, such as unfairness. Identifying best practices that counter these challenges promises to deliver actionable solutions. We therefore identify AM practices that workers deem particularly fair. We conducted seven online focus groups with a diverse set of platform workers and analyzed the data through an organizational justice lens. Our findings reveal that AM practices can promote fairness by providing information, empowering workers, or autonomously executing tasks in their interest. Alternatively, where unfairness has occurred, AM practices can redress it; these practices include delegating dispute resolution to the involved actors, investigating evidence, and autonomously determining restorative consequences. Our findings have theoretical implications for the literature on algorithmic fairness, AM, and organizational justice, and they might also be adopted in practice to improve workers' conditions on DLPs.

Approaches to Improve Fairness when Deploying AI-based Algorithms in Hiring – Using a Systematic Literature Review to Guide Future Research (2023-01-03)
Rieskamp, Jonas; Hofeditz, Lennart; Mirbabaie, Milad; Stieglitz, Stefan
Algorithmic fairness in Information Systems (IS) is a concept that aims to mitigate systematic discrimination and bias in automated decision making. However, previous research has argued that different fairness criteria are often incompatible. In hiring, AI is used to assess and rank applicants according to their fit for vacant positions, yet various types of bias also affect AI-based algorithms (e.g., through biased historical data). To reduce AI's bias, and thereby unfair treatment, we conducted a systematic literature review to identify suitable strategies for the context of hiring. We identified nine fundamental articles in this context and extracted four types of approaches to address unfairness in AI, namely pre-process, in-process, post-process, and feature selection. Based on our findings, we (a) derived a research agenda for future studies and (b) proposed strategies for practitioners who design and develop AI systems for hiring purposes.

Accepting the Familiar: The Effect of Perceived Similarity with AI Agents on Intention to Use and the Mediating Effect of IT Identity (2023-01-03)
Alawi, Naif; De Vreede, Triparna; De Vreede, Gert-Jan
Despite the rise and integration of AI technologies within organizations, our understanding of the impact of this technology on individuals remains limited. Although the IS use literature provides important guidance for organizations seeking to increase employees' willingness to work with new technology, the utilitarian view of prior IS use research limits its applicability to the new and evolving social interaction between humans and AI agents. We contribute to the IS use literature by adopting a social view to understand the impact of AI agents on individuals' perceptions and behavior. Focusing on the main design dimensions of AI agents, we propose a framework that draws on social psychology theories to explain the impact of those design dimensions on individuals. Specifically, we build on Similarity Attraction Theory to propose an AI similarity-continuance model that explains how similarity with AI agents influences individuals' IT identity and their intention to continue working with those agents. Through an online brainstorming experiment, we found that similarity with AI agents indeed has a positive impact on IT identity and on the intention to continue working with the AI agent.

Artificial Intelligence and Digital Work: The Sociotechnical Reversal (2023-01-03)
Fischer, Louise; Wunderlich, Nico; Baskerville, Richard
In the classical view, a well-designed information system (IS) comprises two interrelated yet distinct subsystems: one that represents the technological dimension of work and one that represents the social dimension. When these subsystems are treated as equally important, they constitute a sociotechnical whole, producing economic outcomes such as profit and efficiency as well as humanistic outcomes such as engagement and well-being. We see this classical view increasingly being forgotten. In this conceptual paper, we reflect upon the role of humans and technology in these changing work environments. As technical aspects of artificial intelligence and digital technologies come to dominate the social side of work, we suggest that a sociotechnical reversal is taking place. While this technosocial reality may be well motivated by advances in efficiency and productivity, its effects on well-being and engagement are less well understood. Consequently, we provide a set of theoretically derived principles to guide these changes in the digital workplace.

Introduction to the Minitrack on AI and the Future of Work (2023-01-03)
De Vreede, Triparna; Cheng, Xusen; Siemon, Dominik
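
To make the four approach types named in the Rieskamp et al. abstract above a bit more concrete, the following is a minimal sketch of one of them, a pre-process approach: training examples are reweighed before a hiring model is fit so that a protected attribute becomes statistically independent of the historical hiring label (a reweighing scheme in the spirit of Kamiran and Calders). The column names, toy data, and model choice are illustrative assumptions and are not taken from the paper.

    # Minimal sketch of a "pre-process" fairness approach: reweighing training
    # data before fitting a hiring model. All column names, values, and the
    # model choice are hypothetical illustrations.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Toy applicant data with a protected attribute and a historical hiring label.
    df = pd.DataFrame({
        "gender":     ["f", "f", "m", "m", "f", "m", "f", "m"],
        "experience": [5, 3, 4, 6, 2, 5, 7, 1],
        "hired":      [1, 0, 1, 1, 0, 1, 1, 0],
    })

    # Pre-process step: weight each (group, label) cell so that the protected
    # attribute and the label become statistically independent in the weighted data.
    p_group = df["gender"].value_counts(normalize=True)
    p_label = df["hired"].value_counts(normalize=True)
    p_joint = df.groupby(["gender", "hired"]).size() / len(df)
    weights = df.apply(
        lambda row: (p_group[row["gender"]] * p_label[row["hired"]])
        / p_joint[(row["gender"], row["hired"])],
        axis=1,
    )

    # Fit a simple scoring model on the non-protected feature, using the weights.
    model = LogisticRegression()
    model.fit(df[["experience"]], df["hired"], sample_weight=weights)
    print(model.predict_proba(df[["experience"]])[:, 1])

By contrast, in-process approaches would add a fairness constraint or penalty to the model's training objective, post-process approaches would adjust scores or decisions after training, and feature-selection approaches would remove or replace features that act as proxies for protected attributes.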