The birth of Romanian BERT
Abstract
Large-scale pretrained language models have become ubiquitous in Natural Language Processing. However, most of these models are available either in high-resource languages, in particular English, or as multilingual models that compromise performance on individual languages for coverage. This paper introduces Romanian BERT, the first purely Romanian transformer-based language model, pretrained on a large text corpus. We discuss corpus composition and cleaning, the model training process, as well as an extensive evaluation of the model on various Romanian datasets. We open source not only the model itself, but also a repository that contains information on how to obtain the corpus, fine-tune and use this model in production (with practical examples), and how to fully replicate the evaluation process.
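Since the abstract mentions that the released repository shows how to fine-tune and use the model in production, a minimal usage sketch with the Hugging Face transformers library is given below. The model identifier used here is an assumption for illustration; consult the authors' repository for the exact published name.

```python
# Minimal sketch: loading a Romanian BERT checkpoint with Hugging Face transformers.
# The model identifier below is assumed for illustration; the authors' repository
# documents the official release name.
from transformers import AutoTokenizer, AutoModel

model_name = "dumitrescustefan/bert-base-romanian-cased-v1"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a Romanian sentence and obtain contextual embeddings.
inputs = tokenizer("Acesta este un exemplu de propoziție.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```

For downstream tasks such as the Romanian evaluations described in the paper, the same checkpoint would typically be loaded through a task-specific head (e.g. `AutoModelForTokenClassification`) before fine-tuning.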
- Publication: arXiv e-prints
- Pub Date: September 2020
- DOI: 10.48550/arXiv.2009.08712
- arXiv: arXiv:2009.08712
- Bibcode: 2020arXiv200908712D
- Keywords: Computer Science - Computation and Language
- E-Print: 5 pages (4 + reference page), accepted in Findings of EMNLP 2020