Deconstructing Review Deception: A Study on Counterfactual Explanation and XAI in Detecting Fake and GPT-Generated Reviews

dc.contributor.author: Chernyaeva, Olga
dc.contributor.author: Hong, Taeho
dc.contributor.author: Lee, One-Ki Daniel
dc.date.accessioned: 2023-12-26T18:36:10Z
dc.date.available: 2023-12-26T18:36:10Z
dc.date.issued: 2024-01-03
dc.identifier.doi: 10.24251/HICSS.2024.056
dc.identifier.isbn: 978-0-9981331-7-1
dc.identifier.other: aa4185cc-4a20-4f45-affc-3714a3e40092
dc.identifier.uri: https://hdl.handle.net/10125/106431
dc.language.iso: eng
dc.relation.ispartof: Proceedings of the 57th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Conversational AI and Ethical Issues
dc.subject: counterfactual explanation
dc.subject: fake review detection
dc.subject: generated reviews
dc.subject: gpt
dc.subject: xai
dc.title: Deconstructing Review Deception: A Study on Counterfactual Explanation and XAI in Detecting Fake and GPT-Generated Reviews
dc.type: Conference Paper
dc.type.dcmi: Text
dcterms.abstract: Our models not only deliver high-performing predictions but also illuminate the decision-making processes underlying those predictions. Experiments on five datasets show that the framework generates diverse and specific counterfactuals, strengthening deception detection and supporting assessments of review authenticity. These results advance the understanding of AI-generated review detection and, more broadly, AI interpretability, marking a step forward in both the detection of deceptive reviews and the wider field of explainable AI.
dcterms.extent: 10 pages
prism.startingpage: 467
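
The abstract above refers to counterfactual explanations for fake-review detection, i.e., minimal edits to a review that change a classifier's decision. The sketch below is only a hypothetical illustration of that general idea, not the authors' framework: the toy corpus, the TF-IDF/logistic-regression classifier, and the substitution lexicon are all assumptions made so the example runs stand-alone.

```python
# Illustrative sketch only: a greedy word-level counterfactual search for a
# toy review classifier. All data, labels, and substitutions are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = deceptive review, 0 = genuine review (hypothetical labels).
texts = [
    "absolutely amazing best hotel ever perfect stay wonderful",
    "best ever amazing perfect wonderful incredible place",
    "nice clean room decent breakfast fairly quiet street",
    "decent hotel fine location nice staff fairly good stay",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Hypothetical substitution lexicon: superlatives mapped to milder wording.
SUBSTITUTES = {"amazing": "nice", "best": "decent", "perfect": "fine",
               "absolutely": "fairly", "ever": "good"}

def counterfactual(review, target=0, max_edits=5):
    """Greedily swap words until the predicted label reaches `target`
    or the edit budget runs out; return the edited review and the swaps."""
    words, edits = review.split(), []
    for _ in range(max_edits):
        if clf.predict([" ".join(words)])[0] == target:
            break
        best = None
        for i, w in enumerate(words):
            if w in SUBSTITUTES:
                # Score each candidate swap by how much it lowers P(deceptive).
                trial = words[:i] + [SUBSTITUTES[w]] + words[i + 1:]
                p = clf.predict_proba([" ".join(trial)])[0][1]
                if best is None or p < best[0]:
                    best = (p, i, SUBSTITUTES[w])
        if best is None:
            break
        _, i, new_w = best
        edits.append((words[i], new_w))
        words[i] = new_w
    return " ".join(words), edits

print(counterfactual("absolutely amazing best hotel ever perfect stay"))
```

A greedy word-level search like this only shows the mechanics of flipping a classifier's decision; the counterfactual generation method the paper actually proposes is described in the full text (0046.pdf).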

Files

Original bundle
Name: 0046.pdf
Size: 654.34 KB
Format: Adobe Portable Document Format