Feature enrichment through multi-gram models
|Keywords:||feature enrichment, feature selection, multi-gram, n-gram, text classification|
|Issue Date:||03 Jan 2018|
|Abstract:||We introduce a feature enrichment approach based on multi-gram cosine similarity classification models. The approach combines cosine similarity features from several N-gram word models with unsupervised sentiment features, yielding models with a richer feature set than any of these approaches provides alone. We evaluate the classification models with different machine learning algorithms on categories of hateful and violent web content, and show that the multi-gram models deliver across-the-board performance improvements for every category tested, compared to combinations of baseline unigram, N-gram, and sentiment classification models. The multi-gram models also perform significantly better than the comparison methods on highly imbalanced sets, and the enrichment approach leaves room for further improvement, since it adds optimization options rather than exhausting them.|
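The enrichment idea described in the abstract can be sketched as follows: for each N up to some maximum, compute the cosine similarity between a document's N-gram count vector and a category reference vector, then concatenate those similarities with an unsupervised sentiment score into one enriched feature vector for a downstream classifier. This is a minimal illustration only; the whitespace tokenizer, the single reference text per category, and the toy sentiment lexicon are stand-ins, not the paper's actual resources or pipeline.

```python
# Hedged sketch of multi-gram feature enrichment (illustrative, not the
# paper's implementation). Uses only the Python standard library.
from collections import Counter
from math import sqrt


def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def sentiment_score(tokens, lexicon):
    """Simple unsupervised score: mean lexicon polarity of matched tokens."""
    hits = [lexicon[t] for t in tokens if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0


def multigram_features(doc, reference, lexicon, max_n=3):
    """Enriched feature vector: one cosine similarity per n-gram order
    (1..max_n) against the category reference, plus a sentiment score."""
    d, r = doc.lower().split(), reference.lower().split()
    sims = [cosine(Counter(ngrams(d, n)), Counter(ngrams(r, n)))
            for n in range(1, max_n + 1)]
    return sims + [sentiment_score(d, lexicon)]


# Toy lexicon and texts for demonstration only.
lexicon = {"hate": -1.0, "attack": -1.0, "peace": 1.0}
feats = multigram_features("they attack and spread hate",
                           "spread hate and attack others", lexicon)
print(feats)  # [unigram cosine, bigram cosine, trigram cosine, sentiment]
```

Each element of the resulting vector captures a different granularity of lexical overlap; feeding the concatenation to a classifier is what gives the "richer feature set" the abstract refers to.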
|Rights:||Attribution-NonCommercial-NoDerivatives 4.0 International|
|Appears in Collections:||Big Data and Analytics: Pathways to Maturity|