Authors: Lewandowski, Tom; Poser, Mathis; Kučević, Emir; Heuer, Marvin; Hellmich, Jannis; Raykhlin, Michael; Blum, Stefan; Böhmann, Tilo
Date Accessioned: 2022-12-27
Date Available: 2022-12-27
Date Issued: 2023-01-03
ISBN: 978-0-9981331-6-4
URI: https://hdl.handle.net/10125/103055
Abstract: Contemporary organizations are increasingly adopting conversational agents (CAs) as intelligent, natural language-based solutions for providing services and information. CAs promote new forms of personalization, speed, cost-effectiveness, and automation. However, despite their hype in research and practice, organizations fail to sustain CAs in operations. They struggle to leverage CAs' potential because they lack knowledge of how to evaluate and improve the quality of CAs throughout their lifecycle. We address this research gap by conducting a design science research (DSR) project, aggregating insights from the literature and practice to derive a validated set of quality criteria for CAs. Our study contributes to CA research and guides practitioners by providing a blueprint for structuring the evaluation of CAs to discover areas for systematic improvement.
Pages: 10
Language: eng
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Subjects: Artificial Intelligence-based Assistants; artificial intelligence assistants; chatbots; conversational agents; design science research (dsr); quality criteria set
Title: Leveraging the Potential of Conversational Agents: Quality Criteria for the Continuous Evaluation and Improvement
Type: text
DOI: 10.24251/HICSS.2023.424