On Leakage of Code Generation Evaluation Datasets
Abstract
In this paper, we consider contamination by code generation test sets, in particular their presence in the training data of modern large language models. We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data, and (iii) overfitting to evaluation sets during model selection. To address this, we release Less Basic Python Problems (LBPP): an uncontaminated new benchmark of 161 prompts with their associated Python solutions. LBPP is released at https://huggingface.co/datasets/CohereForAI/lbpp.
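As a concrete illustration of what a check for direct data leakage can look like, the sketch below loads LBPP from the release URL above and flags prompts whose long n-grams appear verbatim in a training corpus. This is a generic n-gram overlap sketch, not the authors' method; the split name, field name, n-gram length, and the stand-in `training_corpus` are all assumptions.

```python
# Minimal sketch of a direct-leakage check: flag benchmark prompts whose long
# n-grams appear verbatim in a training corpus. Generic illustration only,
# not the method used in the paper.
from datasets import load_dataset


def ngrams(text: str, n: int = 13):
    """Yield whitespace-tokenized n-grams of `text` as strings."""
    tokens = text.split()
    for i in range(len(tokens) - n + 1):
        yield " ".join(tokens[i : i + n])


def is_contaminated(prompt: str, corpus_ngrams: set, n: int = 13) -> bool:
    """Flag a prompt if any of its n-grams occurs in the corpus."""
    return any(g in corpus_ngrams for g in ngrams(prompt, n))


# Dataset path taken from the release URL in the abstract; the split name
# ("test") and field name ("instruction") are assumptions about the schema.
lbpp = load_dataset("CohereForAI/lbpp", split="test")

# Hypothetical stand-in for a tokenized training corpus.
training_corpus = ["def add(a, b): return a + b"]
corpus_ngrams = {g for doc in training_corpus for g in ngrams(doc)}

flagged = [ex for ex in lbpp if is_contaminated(ex["instruction"], corpus_ngrams)]
print(f"{len(flagged)} of {len(lbpp)} prompts flagged as potentially leaked")
```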
- Publication:
- arXiv e-prints
- Pub Date:
- July 2024
- DOI:
- 10.48550/arXiv.2407.07565
- arXiv:
- arXiv:2407.07565
- Bibcode:
- 2024arXiv240707565M
- Keywords:
- Computer Science - Computation and Language
- E-Print:
- EMNLP 2024 Findings; 5 main pages, 9 pages in total