Artificial Intelligence-based Assistants

Recent Submissions

Now showing 1 - 5 of 7
  • Item
    When Do Customers Perceive Artificial Intelligence as Fair? An Assessment of AI-based B2C E-Commerce
    (2022-01-04) Weith, Helena; Matt, Christian
    Artificial intelligence (AI) enables new opportunities for business-to-consumer (B2C) e-commerce services, but it can also lead to customer dissatisfaction if customers perceive the implemented service as unfair. While we have a broad understanding of the concept of fair AI, a concrete assessment of fair AI from a customer-centric perspective is lacking. Drawing on systemic service fairness, we conducted 20 in-depth semi-structured customer interviews in the context of B2C e-commerce services. We identified 19 AI fairness rules along four interrelated fairness dimensions: procedural, distributive, interpersonal, and informational. By providing a comprehensive set of AI fairness rules, our research contributes to the information systems (IS) literature on fair AI, service design, and human-computer interaction. Practitioners can leverage these rules for the development and configuration of AI-based B2C e-commerce services.
  • Item
    Match or Mismatch? How Matching Personality and Gender between Voice Assistants and Users Affects Trust in Voice Commerce
    (2022-01-04) Reinkemeier, Fabian; Gnewuch, Ulrich
    Despite the ubiquity of voice assistants (VAs), they see limited adoption in the form of voice commerce, an online sales channel using natural language. A key barrier to the widespread use of voice commerce is the lack of user trust. To address this problem, we draw on similarity-attraction theory to investigate how trust is affected when VAs match the user’s personality and gender. We conducted a scenario-based experiment (N = 380) with four VAs designed to have different personalities and genders by customizing only the auditory cues in their voices. The results indicate that a personality match increases trust, while the effect of a gender match on trust is non-significant. Our findings contribute to research by demonstrating that some types of matches between VAs and users are more effective than others. Moreover, we reveal that it is important for practitioners to consider auditory cues when designing VAs for voice commerce.
  • Item
    Ecosystem Intelligence for AI-based Assistant Platforms
    (2022-01-04) Schmidt, Rainer; Alt, Rainer; Zimmermann, Alfred
    Digital assistants like Alexa, Google Assistant, or Siri have seen widespread adoption in recent years. Using artificial intelligence (AI) technologies, they provide a vocal interface to physical devices as well as to digital services and have spurred an entirely new ecosystem. This ecosystem comprises not only the big tech companies themselves but also a strongly growing community of developers that make these functionalities available via digital platforms. At present, little research is available to understand the structure and the value creation logic of these AI-based assistant platforms and their ecosystems. This research adopts ecosystem intelligence to shed light on their structure and dynamics. It combines existing data collection methods with an automated approach that proves useful in deriving a network-based conceptual model of Amazon's Alexa assistant platform and ecosystem. It shows that skills are a key unit of modularity in this ecosystem, linked to other elements such as service, data, and money flows. It also suggests that the topology of the Alexa ecosystem may be described using the criteria of reflexivity, symmetry, variance, strength, and centrality of skill coactivations. Finally, it identifies three ways to create and capture value on AI-based assistant platforms. Surprisingly, only a few skills use a transactional business model by selling services and goods; many skills are instead complementary, providing information, configuration, and control services for other skill providers' products and services. These findings provide new insights into the highly relevant ecosystems of AI-based assistant platforms and might serve enterprises in developing their strategies within these ecosystems. They might also pave the way to a faster, data-driven approach to ecosystem intelligence.
  • Item
    Consumer Adoption of Artificial Intelligence: A Review of Theories and Antecedents
    (2022-01-04) Bawack, Ransome; Desveaud, Kathleen
    People are increasingly adopting technologies powered by artificial intelligence (AI) in their everyday lives. Several researchers have investigated this phenomenon, drawing on various theoretical perspectives to explain the motivations behind such behaviour. Our paper reviews this body of knowledge to highlight the technologies, theories, and antecedents of AI adoption investigated thus far in academic research. By analysing publications found in Harzing's Journal Quality List, this paper identifies 52 publications on user adoption of AI, 198 antecedents, and 36 theoretical perspectives used to explain user adoption of AI. The most widely used theoretical perspectives in this area of research are the technology acceptance model (TAM) and the unified theory of acceptance and use of technology (UTAUT), while perceived usefulness, perceived ease of use, and trust are the most studied antecedents. Finally, we discuss the implications of these findings for future research on AI adoption by consumers.
  • Item
    Claim success, but blame the bot? User reactions to service failure and recovery in interactions with humanoid service robots
    (2022-01-04) Mozafari, Nika; Schwede, Melanie; Hammerschmidt, Maik; Weiger, Welf H.
    Service robots are changing the nature of service delivery in the digital economy. However, frequently occurring service failures pose a major challenge to service robot acceptance. To understand how different service outcomes in interactions with service robots affect usage intentions, this research investigates (1) how users attribute failures committed by humanoid service robots and (2) whether responsibility attribution varies depending on service robot design. In a 3 (success vs. failure vs. failure with recovery) × 2 (warm vs. competent service robot design) between-subjects online experiment, this research finds evidence for the self-serving bias in a service robot context, that is, attributing successes to oneself but blaming others for failures. This effect emerges independently of service robot design. Furthermore, recovery through human intervention can mitigate the consequences of failure only for robots with a warm design. The authors discuss consequences for applications of humanoid service robots and implications for further research.