LLaMA: Open and Efficient Foundation Language Models
Abstract
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
- Publication: arXiv e-prints
- Pub Date: February 2023
- DOI: 10.48550/arXiv.2302.13971
- arXiv: arXiv:2302.13971
- Bibcode: 2023arXiv230213971T
- Keywords: Computer Science - Computation and Language