AI, Organizing, and Management

Recent Submissions

  • Item
    The Birth of AI-driven Nudges
    (2023-01-03) Nyman, Stig
    AI methods allow for a multitude of new forms of managerial control. One is algorithmic nudging, in which organizations use AI methods to control workers through targeted recommendations. Drawing upon Michel Foucault’s analytical strategies, the paper examines the intellectual heritage and ideological roots of AI-nudges. Scholars have commented on the resemblance between algorithmic nudging and Taylorist scientific management. However, as this paper shows, the discourse of AI-nudges also shares significant lineages with other, subsequent, opposing managerial paradigms. Building on the analysis of the lineages of AI-nudges, the paper discusses how their use implies three contestable presumptions: 1) that work can be codified, 2) that workers require autonomy over their work, and 3) that there is no existing conflict of interest between workers and the organization.
  • Item
    Introduction to the Minitrack on AI, Organizing, and Management
    (2023-01-03) Nickerson, Jeff; Saltz, Jeffrey; Seidel, Stefan; Lindberg, Aron
  • Item
    Why it Remains Challenging to Assess Artificial Intelligence
    (2023-01-03) Brecker, Kathrin; Lins, Sebastian; Sunyaev, Ali
    Artificial Intelligence (AI) assessment to mitigate risks arising from biased, unreliable, or regulatory non-compliant systems remains an open challenge for researchers, policymakers, and organizations across industries. Because research on AI is scattered across disciplines, there is a lack of overview of the challenges that need to be overcome to move AI assessment forward. In this study, we synthesize existing research on AI assessment through a descriptive literature review. Our study reveals seven challenges along three main categories: ethical implications, regulatory gaps, and technical limitations. This study contributes to a better understanding of the challenges in AI assessment so that AI researchers and practitioners can resolve them and move AI assessment forward.
  • Item
    Keeping the Organization in the Loop as a General Concept for Human-Centered AI: The Example of Medical Imaging
    (2023-01-03) Herrmann, Thomas; Pfeiffer, Sabine
    This study emanates from work on human-centered AI and the claim of “keeping the organization in the loop”. A previous study suggested a systematic framework of organizational practices in the context of predictive maintenance and identified four cycles: using AI, customizing AI, original task handling with support of AI, and dealing with contextual changes. Since we assume that these findings can be generalized to other kinds of Machine Learning (ML) applications, we contrast the management activities that support the four cycles and their interplay with a widely different domain: the usage of AI in radiology. Our literature analysis reveals a series of overlaps with the existing framework, but also results in the need for extensions, such as holistic consideration of workflows or supervision and quality assurance.
  • Item
    How Can You Verify that I Am Using AI? Complementary Frameworks for Describing and Evaluating AI-Based Digital Agents in their Usage Contexts
    (2023-01-03) Alter, Steven
    This essay explains complementary frameworks for understanding and managing AI in usage contexts. In contrast with broad generalizations about the nature and impact of AI, those frameworks focus on specific AI-based digital agents used by people and/or machines performing purposeful activities in business, home, or societal environments. The agent responsibility (AR) framework helps in describing roles and responsibilities of specific AI-based digital agents in their usage contexts. The agent evaluation (AE) framework identifies six criteria that different stakeholders might use for evaluating AI-based digital agents.
  • Item
    Context Matters: The Use of Algorithmic Management Mechanisms in Platform, Hybrid, and Traditional Work Contexts
    (2023-01-03) Lippert, Isabell; Kirchner, Kathrin; Wiener, Martin
    Having emerged from platform organizations, algorithmic management (AM) refers to a data-driven approach in which intelligent algorithms are employed to automate managerial functions. Given its organizational benefits (e.g., efficiency gains), AM is increasingly used in other work contexts as well, including traditional organizations (with permanent employees). Against this backdrop, our study investigates what AM mechanisms are used in different organizational work contexts and to what extent, and why, these mechanisms translate to other contexts. We do so by systematically analyzing and synthesizing knowledge from 45 studies. Our results point to seven usage patterns regarding the contextual translatability of AM mechanisms. For example, while we find that some mechanisms are used across contexts but with differing intentions, we also identify several context-specific AM mechanisms that are not (easily) translatable. We conclude by discussing factors that help explain the identified usage patterns (e.g., worker status and skill level) and promising avenues for future research.
  • Item
    Seven Elements of Phronesis: A Framework for Understanding Judgment in Relation to Automated Decision-Making
    (2023-01-03) Koutsikouri, Dina; Hylving, Lena; Lindberg, Susanne; Bornemark, Jonna
    This conceptual paper aims to explore judgment in the context of automated decision-making systems (ADS). To achieve this, we adopt a modern version of Aristotle’s notion of phronesis to understand judgment. We delineate seven elements of judgment which provide insights into what humans are better at, and what AI is better at, in relation to automated decision-making. These elements are sources of knowledge that guide action, including not-knowing, emotions, sensory perception, experience, intuition, episteme, and techne. Our analysis suggests that most of these attributes are not transferable to AI systems, because judgment in human decision-making requires the integration of all of them, which involves drawing on the contextual and affective resources of phronesis and the competence to make value judgments. The paper contributes to unpacking human judgment capacities and what needs to be cultivated to achieve ‘good’ AI systems that serve humanity, as well as guiding future information systems researchers in exploring human-AI judgment further.