AI in Government

Recent Submissions

  • Item
    Exploring AI-supported Citizen Argumentation on Urban Participation Platforms
    (2023-01-03) Borchers, Marten ; Tavanapour, Navid ; Bittner, Eva
    The paradigm shift in urban planning toward citizen participation originates from the Smart City concept, as politicians and scientists argue that citizens should be included in the design of their environment. This led to the development of urban participation platforms and was accelerated by the COVID-19 pandemic, as on-site participation was unavailable. Past projects showed that urban participation platforms can reach thousands of citizens, but it became apparent that citizens' contributions vary widely and are sometimes neither understandable nor comprehensible, which limits their value for urban projects. Therefore, we examined how an AI-based feedback system can improve citizens' argumentation on urban platforms. For this, an exploratory comparison of two prototypes was conducted, applying Argumentation Theory and Mayring's qualitative content analysis to empirically analyze the collected data. The findings highlight that the developed AI-based feedback system supports citizens and leads to more argumentative and comprehensible contributions on urban participation platforms.
  • Item
    Federated Learning as a Solution for Problems Related to Intergovernmental Data Sharing
    (2023-01-03) Sprenkamp, Kilian ; Delgado Fernandez, Joaquin ; Eckhardt, Sven ; Zavolokina, Liudmila
    To address global problems, intergovernmental collaboration is needed. Modern solutions to these problems often include data-driven methods such as artificial intelligence (AI), which require large amounts of data to perform well. However, data sharing between governments is limited. A possible solution is federated learning (FL), a decentralised AI method created to utilise personal information on edge devices. Instead of sharing data, governments can train their own models and share only the model parameters with a centralised server, which aggregates them into a superior overall model (a generic sketch of this aggregation step appears after this list). By conducting a structured literature review, we show how major intergovernmental data-sharing challenges, such as disincentives, legal and ethical issues, and technical constraints, can be addressed through FL. FL thus enhances AI while maintaining privacy, allowing governments to collaboratively address global problems, to the benefit of both governments and citizens.
  • Item
    Tertiary Study on the Use of Artificial Intelligence for Service Delivery: A Bibliometric Analysis of Systematic Literature Reviews
    (2023-01-03) Chouikh, Arbi ; Khechine, Hager ; Gagnon, Marie-Pierre
    Despite the large number of systematic literature reviews (SLRs) on the use of artificial intelligence (AI) for service delivery, scholars call for more scientific evidence. However, the direction that future reviews will take depends on the knowledge accumulated in the existing literature. Therefore, the objective of this research is to explore SLRs that have synthesized the use of AI for service delivery. We conducted a tertiary study, consisting of a bibliometric analysis of SLRs. We searched six bibliographic databases for SLRs published over the last ten years. Sixty-six studies meeting the inclusion criteria were processed through a bibliometric analysis in which we combined article metadata with data extracted from the full-text review (see the illustrative sketch after this list). The results describe the publication trends of SLRs, their application domains, and the particularities of the private and public sectors. Recommendations for future SLRs on the use of AI for service delivery are proposed.
  • Item
    Introduction to the Minitrack on AI in Government
    (2023-01-03) Carter, Lemuria ; Liu, Dapeng ; Gasco-Hernandez, Mila
  • Item
    What Other Factors Might Impact Building Trust in Government Decisions Based on Decision Support Systems, Except for Transparency and Explainability?
    (2023-01-03) Leewis, Sam ; Smit, Koen
    Decision Support Systems (DSS) are increasingly being used to support operational decision-making using large amounts of data. One key aspect of successful adoption is that users trust the DSS. Transparency and explainability are often mentioned in the literature and in practice as major contributors to trust. But what happens when a DSS is transparent and explainable by design? Which other contributors to trust are then relevant is the main focus of this paper, in the context of Dutch governmental subject-matter experts who design and work with DSSs. We used a Mixed-Method Sequential Explanatory Design, in which a survey was conducted to gather empirical data. The findings present 20 focal points that contribute to trust in DSS. These focal points require future research, specifically on how to take them into account when designing a DSS. Ultimately, this could help increase the adoption of DSSs in general.
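
Federated parameter averaging, illustrated. The sketch below shows, under stated assumptions, the aggregation step described in the federated learning abstract above: each government trains on its own data and shares only model parameters, which a central server averages. It is a generic FedAvg-style example in Python/NumPy; the function names, the toy "training" step, and the dataset sizes are hypothetical and are not taken from the paper.

import numpy as np

def local_update(global_params, local_data, lr=0.1):
    # Hypothetical local "training": each government nudges the shared
    # parameters toward statistics of its own data. Raw data never leaves
    # the government's infrastructure; only updated parameters do.
    return global_params + lr * (local_data.mean(axis=0) - global_params)

def aggregate(updates, sizes):
    # Central server: weighted average of the parameter vectors, weighted
    # by each government's dataset size (FedAvg-style aggregation).
    total = sum(sizes)
    return sum((n / total) * u for n, u in zip(sizes, updates))

# Hypothetical per-government datasets (kept local, never shared).
rng = np.random.default_rng(0)
gov_datasets = [rng.random((n, 4)) for n in (500, 1200, 300)]
global_params = np.zeros(4)

for _ in range(5):  # a few federated rounds
    updates = [local_update(global_params, d) for d in gov_datasets]
    global_params = aggregate(updates, [len(d) for d in gov_datasets])

print(global_params)

Only the parameter vectors cross organisational boundaries in this loop, which is the property the abstract relies on to sidestep direct data sharing.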
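
Bibliometric merge and trend counts, illustrated. As a rough sketch of the tertiary-study step described in the abstract above (combining article metadata with fields coded during full-text review and describing publication trends), the snippet below merges two hypothetical CSV files and counts reviews per year, domain, and sector. The file names and column names are assumptions for illustration, not the authors' actual coding scheme.

import pandas as pd

# Hypothetical inputs: bibliographic metadata and full-text coding results.
metadata = pd.read_csv("slr_metadata.csv")      # e.g. doi, title, year, source
coding = pd.read_csv("fulltext_coding.csv")     # e.g. doi, application_domain, sector

# Keep only studies present in both files (the included SLRs).
studies = metadata.merge(coding, on="doi", how="inner")

# Publication trend by year, application domain, and sector.
trend = (
    studies.groupby(["year", "application_domain", "sector"])
    .size()
    .reset_index(name="n_reviews")
)
print(trend.sort_values(["year", "n_reviews"], ascending=[True, False]))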