MergeKD: An Empirical Framework for Combining Knowledge Distillation with Model Fusion Using BERT Model
Date
2025-01-07
Starting Page
7285
Abstract
BERT-based models have become the mainstream approach to sentiment classification. However, because text domains diverge, each domain requires its own fine-tuned model, which is often impractical to scale. In addition, the models' large size demands heavy computational resources. In this work, we propose a framework that addresses these issues by combining domain adaptation with lightweight model distillation. From the models trained on individual domains, a merged model is created by fusing all of them, without fine-tuning on a combined dataset. The resulting model is then distilled into a smaller model to reduce the required computation. We evaluate our framework on sentiment classification with Vietnamese datasets using a pre-trained BERT-based architecture. The results show that our merged model achieves the highest average accuracy overall by a substantial margin, while the distilled model maintains competitive performance with a 50% reduction in inference time.
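The abstract does not spell out how the domain-specific models are fused or how the distillation is performed. Below is a minimal sketch of one plausible reading, assuming the merged model is formed by averaging the parameters of the fine-tuned BERT classifiers and the student is trained with a standard soft-label distillation loss; the model names, temperature, and mixing weight are illustrative placeholders, not values from the paper.

# Hedged sketch: parameter-averaging fusion of domain-specific BERT classifiers,
# followed by soft-label knowledge distillation into a smaller student model.
# The fusion rule, temperature, and alpha are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification

def merge_models(model_names, num_labels=2):
    """Fuse fine-tuned models by averaging their parameters (no re-fine-tuning)."""
    models = [AutoModelForSequenceClassification.from_pretrained(n, num_labels=num_labels)
              for n in model_names]
    merged = models[0]
    with torch.no_grad():
        merged_state = merged.state_dict()
        for key in merged_state:
            tensors = [m.state_dict()[key] for m in models]
            if tensors[0].is_floating_point():
                merged_state[key] = torch.mean(torch.stack(tensors), dim=0)
            # non-floating-point buffers (e.g. integer position ids) keep the first model's value
        merged.load_state_dict(merged_state)
    return merged

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend the soft-target KL term (teacher -> student) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

In this sketch the merged model plays the role of the teacher, and the student would be any smaller Transformer trained against distillation_loss on the combined domains.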
Keywords
Intelligent Edge Computing, BERT, distillation, sentiment analysis, model fusion
Extent
10
Related To
Proceedings of the 58th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International