GPT in the Loop: Evidence from the Field.

dc.contributor.author: Yang, Cathy
dc.contributor.author: Allen, Leo
dc.contributor.author: Restrepo-Amariles, David
dc.contributor.author: Troussel, Aurore
dc.date.accessioned: 2023-12-26T18:43:46Z
dc.date.available: 2023-12-26T18:43:46Z
dc.date.issued: 2024-01-03
dc.identifier.isbn: 978-0-9981331-7-1
dc.identifier.other: 46981592-79a7-4b9b-8121-1f46ab306152
dc.identifier.uri: https://hdl.handle.net/10125/106907
dc.language.iso: eng
dc.relation.ispartof: Proceedings of the 57th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Economic and Societal Impacts of Technology, Data, and Algorithms
dc.subject: content evaluation
dc.subject: experiment
dc.subject: gpt disclosure
dc.subject: human-gpt collaboration
dc.title: GPT in the Loop: Evidence from the Field.
dc.type: Conference Paper
dc.type.dcmi: Text
dcterms.abstract: Generative Pre-trained Transformers (GPTs) are highly effective at generating content and increasing productivity, but companies have reservations about their use in professional settings. OpenAI and policymakers suggest that disclosing the use of GPT is necessary, yet there is little empirical evidence on the consequences of such disclosure. In our experiment, managers from a leading consulting firm were unable to distinguish human- from GPT-generated content when the generation source was not disclosed, and disclosing the use of GPT improved the content's evaluation. We also explored the effects of applying a GPT disclosure policy in the workplace: managers prefer that analysts disclose their use of GPT, but their preferences regarding how junior analysts should use GPT may differ from the analysts' own, leading to potential conflicts over disclosure.
dcterms.extent: 10 pages
prism.startingpage: 4343

Files

Original bundle
0428.pdf (793.3 KB, Adobe Portable Document Format)