BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
Abstract
We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as a testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb
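As a concrete illustration of the tokenization-free usage the abstract describes, here is a minimal sketch of loading and querying BPEmb through the `bpemb` Python package distributed from the repository linked above. The argument values (`lang`, `vs`, `dim`) are illustrative choices, not settings prescribed by the paper, and the exact API may differ across package versions.

```python
# Minimal usage sketch. Assumes the `bpemb` Python package
# (pip install bpemb) from the repository linked above; the
# argument values below are illustrative, not prescribed here.
from bpemb import BPEmb

# English model: 25,000 BPE merge operations, 100-dim vectors.
# Pre-trained files are downloaded automatically on first use.
bpemb_en = BPEmb(lang="en", vs=25000, dim=100)

# BPE segments raw text directly -- no tokenizer needed.
print(bpemb_en.encode("Stratford"))       # e.g. ['▁strat', 'ford']

# One embedding vector per subword unit.
print(bpemb_en.embed("Stratford").shape)  # (num_subwords, 100)
```

Because BPE operates on raw text, `encode` and `embed` need no language-specific tokenizer, which is the "tokenization-free" property highlighted in the title.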
- Publication: arXiv e-prints
- Pub Date: October 2017
- DOI: 10.48550/arXiv.1710.02187
- arXiv: arXiv:1710.02187
- Bibcode: 2017arXiv171002187H
- Keywords: Computer Science - Computation and Language