Cross-Lingual Knowledge Distillation: Enhancing Bilingual Models with Multilingual Pretraining
Abstract
In this paper, we explore Cross-Lingual Knowledge Distillation (CLKD) and its effectiveness in enhancing bilingual models through multilingual pretraining. We propose a novel CLKD framework that leverages pre-trained multilingual models to distill knowledge into bilingual models, improving their performance on cross-lingual tasks. Our approach addresses the challenge of limited bilingual training data and yields significant improvements across a range of natural language processing (NLP) tasks.
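The abstract does not spell out the distillation objective, but the step it describes, transferring knowledge from a pre-trained multilingual teacher into a bilingual student, typically reduces to training the student against the teacher's softened predictions. The following is a minimal sketch of such a soft-target distillation loss in PyTorch; the temperature, the mixing weight alpha, and the assumption of a classification-style output are illustrative choices, not details taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soften teacher and student distributions with the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term scaled by T^2 so gradient magnitudes stay comparable
    # to the plain cross-entropy term.
    kd_loss = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the gold labels.
    ce_loss = F.cross_entropy(student_logits, labels)
    # alpha balances imitating the teacher against fitting the labels.
    return alpha * kd_loss + (1.0 - alpha) * ce_loss

In this setting the teacher would be a pre-trained multilingual encoder and the student a smaller bilingual model covering only the two languages of interest; the bilingual student benefits because the softened teacher distribution carries cross-lingual signal that the limited bilingual training data alone does not provide.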
Article Details
How to Cite
Cross-Lingual Knowledge Distillation: Enhancing Bilingual Models with Multilingual Pretraining. (2024). Innovative Computer Sciences Journal, 10(1). https://innovatesci-publishers.com/index.php/ICSJ/article/view/216
Section: Articles
This work is licensed under a Creative Commons Attribution 4.0 International License.