Harnessing Large Language Models for Effective and Efficient Hate Speech Detection
dc.contributor.author | Svetasheva, Arina | |
dc.contributor.author | Lee, Keeheon | |
dc.date.accessioned | 2023-12-26T18:51:42Z | |
dc.date.available | 2023-12-26T18:51:42Z | |
dc.date.issued | 2024-01-03 | |
dc.identifier.doi | 10.24251/HICSS.2023.826 | |
dc.identifier.isbn | 978-0-9981331-7-1 | |
dc.identifier.other | 58600897-e631-4405-af75-99a20b121906 | |
dc.identifier.uri | https://hdl.handle.net/10125/107212 | |
dc.language.iso | eng | |
dc.relation.ispartof | Proceedings of the 57th Hawaii International Conference on System Sciences | |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
dc.subject | Artificial Intelligence and Digital Discrimination | |
dc.subject | hate speech detection | |
dc.subject | large language models | |
dc.subject | synthetic datasets | |
dc.subject | online toxicity | |
dc.title | Harnessing Large Language Models for Effective and Efficient Hate Speech Detection | |
dc.type | Conference Paper | |
dc.type.dcmi | Text | |
dcterms.abstract | Hate speech is a growing concern in online communities, threatening marginalized groups and undermining ethical norms. Although automatic hate speech detection (AHSD) methods have shown promise, there is still room for improvement. Recent advances in large language model pretraining, exemplified by GPT-4, open new possibilities for improving classification. In this study, we propose leveraging synthetic data generation to improve hate speech detection. Our findings demonstrate the effectiveness and efficiency of this approach in rapidly improving model performance, particularly in scenarios where obtaining sufficient amounts of hate speech data is challenging. Through our experiments, we establish that large language models (LLMs) can serve proficiently as both data generators and annotators in the desired format, performing comparably to, and even surpassing, humans. Moreover, we validate the applicability of LLMs in domains characterized by complex and highly abbreviated lexicons, such as the gaming industry. | |
dcterms.extent | 10 pages | |
prism.startingpage | 6898 |