AI in Government

Permanent URI for this collection: https://hdl.handle.net/10125/112457

Recent Submissions

  • Acceptance of Artificial Intelligence Algorithms in Local Governance: Citizens’ Perspective
    (2026-01-06) Roszczynska-Kurasinska, Magdalena; Domaradzka, Anna; Trochymiak, Mateusz; Wnuk, Anna
    Based on data from three representative surveys conducted in Singapore, Tallinn, and Warsaw, this study examines individual-level factors influencing the acceptance of AI in local governance. Drawing on existing literature, we identified four key variables: a) citizen efficacy, b) confidence in local governance, c) experience with technology, and d) technology-related anxiety. We conducted separate Structural Equation Models for each city to uncover context-specific dynamics. Our results reveal a consistent positive relationship between confidence in governance and AI acceptance. In Singapore, citizens demonstrate higher individual agency, with both civic efficacy and institutional confidence driving AI acceptance, while prior technology experience plays a lesser role. In Warsaw, confidence in governance and technology-related anxiety emerge as strong predictors. In Tallinn, technology-related anxiety is the only significant factor, suggesting a more emotionally driven public approach toward AI. These findings highlight the importance of sociopolitical context in shaping public attitudes toward algorithmic decision-making in local governance.
  • Historical Homologation in AI Algorithmic Computation: When the Past Decides Your Future
    (2026-01-06) Cordella, Antonio; Gualdi, Francesco
    This paper introduces “historical homologation”, the systematic tendency of algorithms to make future decisions match past patterns regardless of contemporary evidence. Analyzing the 2020 UK A-level grading controversy, where algorithms downgraded 40% of teacher assessments, we demonstrate how the Ofqual DCP algorithm was designed to protect grade distributions rather than predict individual achievement. Through analysis of 55,000 schools, we identify three core mechanisms. Historical anchoring transformed 2017-19 grade averages into computational rules functioning as hard ceilings. Individual erasure compressed all achievement data into class averages, systematically disadvantaging high-achievers in lower-performing schools. Temporal smoothing operated as a low-pass filter, pulling trajectories back toward historical means. These mechanisms interact synergistically to create computational determinism, the structural necessity that algorithms reproduce rather than transcend historical patterns. This reveals historical homologation as an ontological constraint where historically anchored algorithms shape social futures by overlooking the very changes and exceptions that systems should prioritize.
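    The anchoring and smoothing mechanisms the abstract describes can be sketched in a few lines. This is a minimal illustrative toy, not Ofqual's actual DCP algorithm: the function name, grade scale, and 0.5 smoothing weight are all assumptions made for the illustration.

    ```python
    # Illustrative sketch only: NOT Ofqual's actual DCP algorithm.
    # Shows how anchoring predictions to a school's historical grade
    # distribution can cap individual outcomes regardless of current evidence.

    def anchored_grade(teacher_grade: int, school_history: list[int]) -> int:
        """Clamp a teacher-assessed grade to the school's historical ceiling.

        teacher_grade: current assessment on a toy scale (0=U ... 6=A*)
        school_history: grades awarded at the school in prior years (2017-19)
        """
        historical_ceiling = max(school_history)            # hard ceiling from the past
        historical_mean = sum(school_history) / len(school_history)
        # "Temporal smoothing": pull the grade back toward the historical mean
        smoothed = 0.5 * teacher_grade + 0.5 * historical_mean
        # "Historical anchoring": never exceed what the school achieved before
        return min(round(smoothed), historical_ceiling)

    # A high achiever in a historically low-performing school is downgraded:
    print(anchored_grade(6, [2, 3, 3, 2, 4]))  # teacher says A* (6), output is capped at 4
    ```

    The downgrade happens even though the current evidence (the teacher assessment) is the strongest signal available, which is exactly the "individual erasure" dynamic the paper identifies.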
  • Tool-Augmented LLMs for Rapid Data Insights: Empowering Non-Expert Users in Open Government Data Contexts
    (2026-01-06) Georgi, Maximilian
    Open Government Data (OGD) initiatives aim to foster transparency and innovation, yet actual usage remains low due to limited user resources, low data literacy, and a lack of supportive tools. Adopting a Design Science Research (DSR) approach, this study explores how systems must be designed to enable non-expert users to effectively interact with OGD. We propose a design theory comprising design requirements, design principles, and design features, which we instantiate in a prototypical system based on the ChatGPT platform. The core design integrates large language models (LLMs) with tool augmentation techniques to enable fully automated data retrieval, analysis, visualization, and interpretation through natural language interaction. Initial formative evaluations indicate that tool-augmented LLMs can substantially lower interaction barriers for non-expert users, while limitations in accuracy and reliability remain. Our study contributes prescriptive design knowledge and practical guidance for developing advanced natural language interfaces for OGD platforms.
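    The tool-augmentation pattern the abstract refers to can be sketched generically: the model emits a structured tool call, the runtime executes it against the loaded data, and the result goes back to the model for natural-language interpretation. All names below (`TOOLS`, `dispatch`, the tool names) are hypothetical; this is not the authors' prototype or any specific LLM vendor's API.

    ```python
    # Generic tool-augmentation pattern (hypothetical names; not the authors'
    # prototype). A model-emitted JSON tool call is dispatched to a registered
    # function that runs against an open-data table loaded as a list of dicts.

    import json
    from typing import Callable

    # Registry of tools an LLM may invoke on a dataset (names are illustrative)
    TOOLS: dict[str, Callable[..., object]] = {
        "count_rows": lambda rows, **_: len(rows),
        "mean": lambda rows, column, **_: sum(r[column] for r in rows) / len(rows),
    }

    def dispatch(tool_call_json: str, rows: list[dict]) -> object:
        """Execute a model-emitted tool call against loaded open data."""
        call = json.loads(tool_call_json)
        fn = TOOLS[call["name"]]
        return fn(rows, **call.get("arguments", {}))

    # Example: the model asks for the average of a column in a toy dataset
    data = [{"year": 2022, "permits": 120}, {"year": 2023, "permits": 180}]
    print(dispatch('{"name": "mean", "arguments": {"column": "permits"}}', data))  # 150.0
    ```

    Keeping the tools deterministic and letting the model only choose and parameterize them is one common way to contain the accuracy and reliability limitations the study notes.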
  • Responsible Implementation of AI in Government: A Systematic Review of Governance and Organizational Structures
    (2026-01-06) Peters, Isabell
    Responsible Artificial Intelligence (RAI) has emerged as a central concept to guide the development and use of artificial intelligence in government, yet its institutional implementation remains insufficiently understood. This study conducts a systematic literature review of 24 peer-reviewed articles to synthesize how organizational and governance structures enable or hinder the responsible implementation of AI in government. The findings reveal that while transparency, accountability, and legality are widely invoked as guiding principles, their realization depends on institutional capacity, organizational commitment, and the presence of formal governance mechanisms. The study contributes by consolidating fragmented evidence on RAI in government into a structured synthesis of organizational and governance determinants. Beyond mapping existing studies, it develops a conceptual framing that highlights how normative principles, organizational capacities, and governance mechanisms interact to shape implementation, and reveals the symbolic orientation of many current governance practices as a key challenge for moving toward actionable accountability.
  • Introduction to the Minitrack on AI in Government
    (2026-01-06) Liu, Dapeng; Carter, Lemuria; Hand, Laura