Authors: Schanke, Scott; Burtch, Gordon; Ray, Gautam
Date Accessioned: 2021-12-24
Date Available: 2021-12-24
Date Issued: 2022-01-04
ISBN: 978-0-9981331-5-7
URI: http://hdl.handle.net/10125/79875

Abstract: Platforms today are experimenting with many novel personalization technologies. We explore one such technology here, voice-based conversational agents, with a focus on consumer trust. We consider the joint role of two key design/implementation choices, namely i) disclosing an agent's autonomous nature to the user, and ii) aesthetic personalization, in the form of user voice cloning. We report on a set of controlled experiments based on the investment game, evaluating how these design choices affect subjects' willingness to participate in the game against an autonomous, AI-enabled partner. We find no evidence that disclosure affects trust. However, we find that the greatest level of trust is elicited when a voice-based agent employs a clone of the subject's voice. Mechanism explorations based on post-experiment survey responses indicate that voice cloning induces trust by eliciting a perception of homophily; the voice clone induces subjects to personify the agent and picture it as demographically similar.

Extent: 10 pages
Language: eng
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Subjects: Crowd-based Platforms; deep fake; deep learning; personalization; voice clone
Title: Dynamic Voice Clones Elicit Consumer Trust
Type: text
DOI: 10.24251/HICSS.2022.538