Deconstructing Review Deception: A Study on Counterfactual Explanation and XAI in Detecting Fake and GPT-Generated Reviews

dc.contributor.author Chernyaeva, Olga
dc.contributor.author Hong, Taeho
dc.contributor.author Lee, One-Ki Daniel
dc.date.accessioned 2023-12-26T18:36:10Z
dc.date.available 2023-12-26T18:36:10Z
dc.date.issued 2024-01-03
dc.identifier.isbn 978-0-9981331-7-1
dc.identifier.other aa4185cc-4a20-4f45-affc-3714a3e40092
dc.identifier.uri https://hdl.handle.net/10125/106431
dc.language.iso eng
dc.relation.ispartof Proceedings of the 57th Hawaii International Conference on System Sciences
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Conversational AI and Ethical Issues
dc.subject counterfactual explanation
dc.subject fake review detection
dc.subject generated reviews
dc.subject gpt
dc.subject xai
dc.title Deconstructing Review Deception: A Study on Counterfactual Explanation and XAI in Detecting Fake and GPT-Generated Reviews
dc.type Conference Paper
dc.type.dcmi Text
dcterms.abstract Our models not only deliver high-performing predictions but also illuminate the decision-making processes underlying those predictions. Experiments on five datasets demonstrate the framework's ability to generate diverse and specific counterfactuals, enhancing deception detection and supporting assessments of review authenticity. These results advance the detection of deceptive and AI-generated reviews and, more broadly, the field of AI interpretability.
dcterms.extent 10 pages
prism.startingpage 467
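
Note: the record itself does not describe the counterfactual method. As an illustrative aside only, the following is a minimal Python sketch of what a counterfactual explanation for a fake-review classifier could look like, assuming a hypothetical keyword-weight scorer (FAKE_CUES, fake_score) and a greedy token-deletion search (counterfactual); none of these names or modeling choices come from the paper.

from typing import Callable, List, Tuple

# Hypothetical cue weights standing in for a trained classifier:
# positive weights push the score toward the "fake" label.
FAKE_CUES = {"amazing": 1.2, "perfect": 1.1, "best": 1.0, "ever": 0.8,
             "highly": 0.7, "recommend": 0.6}

def fake_score(tokens: List[str]) -> float:
    """Toy scorer: summed cue weights minus a threshold; > 0 means 'fake'."""
    return sum(FAKE_CUES.get(t.lower(), 0.0) for t in tokens) - 1.0

def predict(tokens: List[str]) -> str:
    return "fake" if fake_score(tokens) > 0 else "genuine"

def counterfactual(tokens: List[str],
                   score: Callable[[List[str]], float]) -> Tuple[List[str], List[str]]:
    """Greedily delete the token whose removal lowers the 'fake' score the most,
    until the predicted label flips; returns the edited review and the removed cues."""
    edited, removed = list(tokens), []
    while edited and score(edited) > 0:
        best_i = min(range(len(edited)),
                     key=lambda i: score(edited[:i] + edited[i + 1:]))
        removed.append(edited.pop(best_i))
    return edited, removed

review = "Amazing product best purchase ever highly recommend it".split()
print(predict(review))                           # fake
edited, cues = counterfactual(review, fake_score)
print(predict(edited), "after removing:", cues)  # genuine, with the deleted cue words

In this toy setup, the removed cue words constitute the counterfactual explanation: they are the minimal edits needed for the model to regard the review as genuine.
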
Files
Original bundle
Name: 0046.pdf
Size: 654.34 KB
Format: Adobe Portable Document Format