Comparing Methods for Mitigating Gender Bias in Word Embedding
dc.contributor.author | Ronchieri, Elisabetta
dc.contributor.author | Biagi, Clara
dc.date.accessioned | 2022-12-27T18:55:01Z
dc.date.available | 2022-12-27T18:55:01Z
dc.date.issued | 2023-01-03
dc.description.abstract | Word embedding captures the semantic and syntactic meaning of words in dense vectors. It also captures biases learned from data that reflect the constructs, cultural stereotypes, and inequalities of society. Many methods for removing bias from traditional word embeddings have been proposed. In this study we use the original GloVe word embedding and compare debiasing methods built on top of GloVe to determine which perform best at removing bias. We have defined a half-sibling regression, repulsion-attraction-neutralization GloVe method and compared it with the gender-preserving GloVe method, the gender-neutral GloVe method, and other debiasing methods. According to our results, no method outperforms the others in all the analyses or completely removes gender information from gender-neutral words. Furthermore, all the debiasing methods perform better than the original GloVe.
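As a rough illustration of what "removing gender information from gender-neutral words" means for a GloVe embedding, the sketch below applies the classic projection-based neutralization step (in the spirit of Bolukbasi et al.'s hard debias), not any of the specific HSR-RAN, GP-GloVe, or GN-GloVe methods compared in the paper. The file path and the `load_glove` helper are assumptions for the example.

```python
import numpy as np

# Illustrative sketch only: generic projection-based neutralization,
# not the methods evaluated in the paper.

def load_glove(path):
    """Hypothetical helper: read GloVe text format into a dict of vectors."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float64)
    return vectors

def gender_direction(vectors):
    """Estimate a 1-D gender direction from a single definitional pair."""
    d = vectors["he"] - vectors["she"]
    return d / np.linalg.norm(d)

def neutralize(v, g):
    """Remove the component of v that lies along the gender direction g."""
    return v - np.dot(v, g) * g

vectors = load_glove("glove.6B.300d.txt")   # path is an assumption
g = gender_direction(vectors)
v = vectors["nurse"]
before = np.dot(v / np.linalg.norm(v), g)
v_debiased = neutralize(v, g)
after = np.dot(v_debiased / np.linalg.norm(v_debiased), g)
print(f"projection on gender direction: before={before:.3f}, after={after:.3f}")
```

A debiased embedding should drive such projections for gender-neutral words toward zero; the paper's analyses check how completely, and at what cost, the compared methods achieve this.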
dc.format.extent | 10
dc.identifier.doi | 10.24251/HICSS.2023.091
dc.identifier.isbn | 978-0-9981331-6-4
dc.identifier.uri | https://hdl.handle.net/10125/102720
dc.language.iso | eng
dc.relation.ispartof | Proceedings of the 56th Hawaii International Conference on System Sciences
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject | Accountability, Evaluation, and Obscurity of AI Algorithms
dc.subject | gender bias
dc.subject | glove
dc.subject | natural language processing
dc.subject | word embedding
dc.title | Comparing Methods for Mitigating Gender Bias in Word Embedding
dc.type.dcmi | text
prism.startingpage | 722