Accountability, Evaluation, and Obscurity of AI Algorithms

Recent Submissions

Now showing 1 - 4 of 4
  • Item
    Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making
    (2022-01-04) Schoeffer, Jakob; Machowski, Yvette; Kühl, Niklas
    Automated decision systems (ADS) have become ubiquitous in many high-stakes domains. These systems typically involve sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow for full comprehension of their inner workings, particularly for affected individuals. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness and trustworthiness towards ADS in comparison to a scenario where a human instead of an ADS makes a high-stakes decision, providing identical, thorough explanations of the decision in both cases. Surprisingly, we find that people perceive ADS as fairer than human decision-makers. Our analyses also suggest that people's AI literacy affects their perceptions: people with higher AI literacy favor ADS more strongly over human decision-makers, whereas people with low AI literacy exhibit no significant differences in their perceptions.
  • Item
    Managing Temporal Dynamics of Filter Bubbles
    (2022-01-04) Hirschmeier, Stefan
    Filter bubbles have attracted much attention in recent years because of their impact on society. Whereas it is commonly agreed that filter bubbles should be managed, the question of how remains open. We draw a picture of filter bubbles as dynamic, slowly changing constructs that are subject to temporal dynamics and are constantly influenced by both machine and human. Anchored in a research setting with a major public broadcaster, we follow a design science approach to designing the temporal dynamics of filter bubbles and users' influence on them over time. We qualitatively evaluate our approach with a smartphone app for personalized radio and find that the adjustability of filter bubbles leads to better co-creation of information flows between broadcaster and listener.
  • Item
    Computation, Rule Following, and Ethics in AIs
    (2022-01-04) Seo, Hyunjin; Thorson, Stuart
    As interest in the development of artificial intelligence (AI) models has grown, so has concern that they embed unintended, undesirable risks and/or fail to properly align with human values and norms. In the extreme case, it is argued that AI may pose existential risks to the human species. We consider computational entities satisfying the Extended Church-Turing Thesis and claim that these include both humans and non-quantum-based AI. We then introduce rules, including moral and ethical rules, as linguistic entities and illustrate how they can be encoded as computational objects. Following Wittgenstein, we show that rules and rule following cannot be purely private. Whether particular rules are being followed in specific instances depends upon ongoing engagement with a language community. However, in situations involving the application of ethics rules, there may be no widely agreed community to use in evaluating whether rules are being followed properly. Indeed, how are we to determine which ethical rules are appropriate? Every appeal to rule following is itself based upon more rules; it is rules all the way down. Deliberative reasoning is at the core of moral and ethics discourse, and issues in conceptualizing rule-following AIs thus become of particular interest.
  • Item