From Prompts to Probes: How Large Language Models Improve Response Quality in Open-Ended Survey Research
Starting Page
4720
Abstract
Probing (i.e., asking follow-up questions to elicit elaboration) is a common method in qualitative research. While probing is effective in human-led interviews, its benefits in AI-led surveys administered by chatbots remain underexplored. This paper investigates whether follow-up questions generated by large language models (LLMs) can improve the quality of open-ended survey responses. In a between-subjects experiment (N = 151), we compared different probing strategies and measured response quality via word count and thematic richness. Contextual probing significantly increased both response length and thematic richness. These findings indicate that LLMs can emulate key techniques of qualitative interviewing, eliciting richer and more informative responses in online surveys. This positions LLM-driven probing as a scalable way to enhance data quality, bridging the gap between automation and qualitative depth. The study contributes to conversational AI research by showing how real-time adaptation fosters user elaboration, and offers practical guidance for integrating LLMs into surveys requiring nuanced input.
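
As a minimal illustration of the kind of contextual probing the abstract describes (not the authors' implementation), the sketch below generates one follow-up question from a participant's open-ended answer. It assumes the OpenAI Python client; the model name, prompt wording, and function name are illustrative placeholders.

# Minimal sketch: generating a contextual follow-up probe with an LLM.
# Assumes the OpenAI Python client (v1.x); model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def contextual_probe(question: str, answer: str) -> str:
    """Ask the LLM for one follow-up question grounded in the answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("You are a survey interviewer. Given a survey "
                         "question and a participant's answer, ask ONE "
                         "short follow-up question that probes a specific "
                         "point the participant raised.")},
            {"role": "user",
             "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return response.choices[0].message.content

# Example: probe a brief answer for elaboration.
print(contextual_probe(
    "What do you value most about remote work?",
    "Mostly the flexibility, I guess.",
))
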
Extent
10 pages
Type
Conference Paper
Related To
Proceedings of the 59th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
