Evaluating the Performance of Transformer Models in Machine Translation

Katya Ivanova

Abstract

This paper provides a comprehensive evaluation of transformer models in machine translation (MT), focusing on their performance across language pairs, domains, and resource levels. Metrics such as BLEU (Bilingual Evaluation Understudy), TER (Translation Edit Rate), and human evaluation are used to assess translation accuracy, fluency, and adequacy. The study explores the strengths of transformer models in handling complex linguistic structures and their ability to generalize across languages. It also examines challenges such as domain mismatch and language divergence, highlighting the need for fine-tuning and domain adaptation techniques to address them. The paper further discusses the impact of data efficiency and transfer learning on transformer performance, particularly for low-resource languages. Results indicate that transformer models consistently outperform traditional MT approaches, offering superior translation quality and robustness; however, they require substantial computational resources and careful tuning to achieve optimal performance. The findings underscore the importance of nuanced evaluation metrics and adaptive strategies for leveraging the full potential of transformer models in machine translation.
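For readers unfamiliar with the automatic metrics named above, the following minimal sketch shows how corpus-level BLEU and TER scores are typically computed with the sacrebleu library; the library choice and the toy hypothesis/reference sentences are illustrative assumptions, not the paper's actual evaluation setup or data.

# A minimal sketch of corpus-level BLEU and TER scoring with sacrebleu.
# The sentences below are hypothetical placeholders, not data from this study.
from sacrebleu.metrics import BLEU, TER

hypotheses = [
    "The cat sat on the mat.",
    "Transformer models translate text well.",
]
references = [
    "The cat sat on the mat.",
    "Transformer models translate texts well.",
]

# sacrebleu expects a list of reference streams, each aligned with the
# hypotheses, so a single reference set is wrapped in an outer list.
bleu = BLEU().corpus_score(hypotheses, [references])
ter = TER().corpus_score(hypotheses, [references])

print(f"BLEU: {bleu.score:.2f}")  # higher is better
print(f"TER:  {ter.score:.2f}")   # lower is better (edit rate)

Note that BLEU rewards n-gram overlap with the reference, while TER counts the edits (insertions, deletions, substitutions, shifts) needed to turn the hypothesis into the reference, which is why the two metrics move in opposite directions.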

How to Cite
Evaluating the Performance of Transformer Models in Machine Translation. (2024). Innovative Computer Sciences Journal, 10(1), 1–7. https://innovatesci-publishers.com/index.php/ICSJ/article/view/102