Artificial Intelligence-based Assistants
Can Chatbots Be Persuasive? How to Boost the Effectiveness of Chatbot Recommendations for Increasing Purchase Intention (2023-01-03)
Firms increasingly invest in chatbots that provide purchase recommendations. However, customers often reject chatbot recommendations because they find neither the content of the recommendation (message-level persuasiveness) nor the chatbot itself (source-level persuasiveness) persuasive. To overcome these barriers and increase purchase intention, this study examines how recommendation messages should be designed and which communication style a chatbot should use to deliver them. Results of a 2 (two-sided vs. one-sided recommendation message) × 3 (warm vs. competent vs. neutral communication style) between-subjects online experiment show that a two-sided recommendation message increases purchase intention, but only for chatbots using a warm or competent communication style. Whereas a warm chatbot raises purchase intention by promoting the recommendation's source persuasiveness, a competent chatbot increases recommendation effectiveness by promoting message persuasiveness. Firms should therefore tailor a chatbot's communication style when providing recommendations meant to persuade customers to purchase.
Leveraging the Potential of Conversational Agents: Quality Criteria for the Continuous Evaluation and Improvement (2023-01-03)
Contemporary organizations increasingly adopt conversational agents (CAs) as intelligent, natural-language-based solutions for providing services and information. CAs enable new forms of personalization, speed, cost-effectiveness, and automation. However, despite the hype in research and practice, organizations fail to sustain CAs in operation. They struggle to leverage CAs' potential because they lack knowledge of how to evaluate and improve the quality of CAs throughout their lifecycle. We address this research gap by conducting a design science research (DSR) project, aggregating insights from the literature and practice to derive a validated set of quality criteria for CAs. Our study contributes to CA research and guides practitioners by providing a blueprint for structuring the evaluation of CAs to discover areas for systematic improvement.
Measuring Ecosystem Complexity - Decision-Making Based on Complementarity Graphs (2023-01-03)
Digital platforms feature increasingly complex architectures with regard to interconnecting with other platforms as well as with a variety of devices and services. This development also shapes the structure of digital platform ecosystems and forces the providers of these platforms, devices, and services to incorporate this complexity into their decision-making. To contribute to the existing body of knowledge on measuring ecosystem complexity, the present research proposes two key artefacts based on ecosystem intelligence. First, complementarity graphs represent ecosystems with an ecosystem's functional modules as vertices and complementarities as edges; each node carries information about the category membership of its module. Second, a process is suggested that collects the information needed for ecosystem intelligence using proxies and web scraping. This approach substitutes for data that is today largely unavailable for competitive reasons. We demonstrate the use of the artefacts in category-oriented complementarity maps, which aggregate the information from complementarity graphs and support decision-making by showing which combinations of module categories create strong and weak complementarities. The paper evaluates the complementarity maps and the data collection process by creating category-oriented complementarity graphs for the Alexa skill ecosystem and concludes with a call for more research based on functional ecosystem intelligence.
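The abstract above describes a complementarity graph as a concrete data structure: functional modules as category-labeled vertices, complementarities as edges, aggregated into a category-oriented complementarity map. A minimal sketch of that aggregation step follows; all module and category names are invented for illustration, since the paper's actual data model is not given here.

```python
from collections import defaultdict

# Hypothetical complementarity graph: each vertex is a functional
# module labeled with its category (names are illustrative only).
modules = {
    "weather_skill": "information",
    "music_skill": "entertainment",
    "smart_light": "smart_home",
    "alarm_skill": "productivity",
}

# Undirected edges: pairs of modules that complement each other.
complementarities = [
    ("weather_skill", "alarm_skill"),
    ("music_skill", "smart_light"),
    ("alarm_skill", "smart_light"),
]

def category_map(modules, edges):
    """Aggregate module-level complementarity edges into counts per
    category pair, i.e. a category-oriented complementarity map."""
    counts = defaultdict(int)
    for a, b in edges:
        # Sort so (x, y) and (y, x) land in the same bucket.
        pair = tuple(sorted((modules[a], modules[b])))
        counts[pair] += 1
    return dict(counts)

print(category_map(modules, complementarities))
```

In a map built this way, category pairs with high edge counts mark strong complementarities and sparse pairs mark weak ones, which is the decision-support signal the abstract describes.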
Machine Learning in Transaction Monitoring: The Prospect of xAI (2023-01-03)
Banks hold a societal responsibility and face regulatory requirements to mitigate the risk of financial crime. Risk mitigation primarily happens through monitoring customer activity via Transaction Monitoring (TM). Recently, Machine Learning (ML) has been proposed to identify suspicious customer behavior, which raises complex socio-technical questions around the trust in and explainability of ML models and their outputs. However, little research is available because of the sensitivity of the domain. We aim to fill this gap with empirical research exploring how ML-supported automation and augmentation affect the TM process and what stakeholders require for building eXplainable Artificial Intelligence (xAI). Our study finds that xAI requirements depend on the liable party in the TM process, which shifts depending on whether TM is augmented or automated. Context-relatable explanations can provide much-needed support for auditing and may diminish bias in investigators' judgement. These results suggest a use-case-specific approach to xAI to adequately foster the adoption of ML in TM.