$\texttt{metabench}$ – A Sparse Benchmark to Measure General Ability in Large Language Models
Abstract
Large Language Models (LLMs) vary in their abilities on a range of tasks. Initiatives such as the $\texttt{Open LLM Leaderboard}$ aim to quantify these differences with several large benchmarks (sets of test items to which an LLM can respond either correctly or incorrectly). However, high correlations within and between benchmark scores suggest that (1) there exists a small set of common underlying abilities that these benchmarks measure, and (2) items tap into redundant information and the benchmarks may thus be considerably compressed. We use data from $n > 5000$ LLMs to identify the most informative items of six benchmarks: ARC, GSM8K, HellaSwag, MMLU, TruthfulQA, and WinoGrande (with $d=28,632$ items in total). From them we distill a sparse benchmark, $\texttt{metabench}$, that has less than $3\%$ of the original size of all six benchmarks combined. This new sparse benchmark goes beyond point scores by yielding estimators of the underlying benchmark-specific abilities. We show that these estimators (1) can be used to reconstruct each original $\textit{individual}$ benchmark score with, on average, $1.5\%$ root mean square error (RMSE), (2) reconstruct the original $\textit{total}$ score with $0.8\%$ RMSE, and (3) have a single underlying common factor whose Spearman correlation with the total score is $r = 0.93$.
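To make the measurement pipeline concrete, here is a minimal sketch (not the authors' code) of the kind of procedure the abstract describes: estimating a benchmark-specific ability from a small item subset under a standard two-parameter logistic (2PL) IRT model, reconstructing scores from that ability, and comparing them with the original scores via RMSE and Spearman correlation. All sizes, item parameters, and the simulated response data below are hypothetical placeholders.

```python
# Sketch only: 2PL ability estimation + score reconstruction, with toy data.
# The paper's actual item-selection and reconstruction procedures are more
# involved; this illustrates the IRT machinery the abstract refers to.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import spearmanr

def irt_2pl(theta, disc, diff):
    """2PL item response function: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-disc * (theta - diff)))

def estimate_ability(responses, disc, diff):
    """Maximum-likelihood ability estimate for one LLM's 0/1 responses."""
    def neg_log_lik(theta):
        p = np.clip(irt_2pl(theta, disc, diff), 1e-9, 1 - 1e-9)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

rng = np.random.default_rng(0)
n_llms, n_items = 200, 100             # toy sizes, not the paper's
disc = rng.uniform(0.5, 2.0, n_items)  # hypothetical item discriminations
diff = rng.normal(0.0, 1.0, n_items)   # hypothetical item difficulties
theta_true = rng.normal(0.0, 1.0, n_llms)
responses = rng.random((n_llms, n_items)) < irt_2pl(theta_true[:, None], disc, diff)

theta_hat = np.array([estimate_ability(r, disc, diff) for r in responses])
true_score = responses.mean(axis=1) * 100   # observed percent-correct score
recon_score = irt_2pl(theta_hat[:, None], disc, diff).mean(axis=1) * 100

rmse = np.sqrt(np.mean((recon_score - true_score) ** 2))
rho, _ = spearmanr(theta_hat, true_score)
print(f"RMSE = {rmse:.2f} points, Spearman r = {rho:.2f}")
```

In the actual pipeline, the item parameters would first be fit to the response matrices of the $n > 5000$ LLMs and only the most informative items retained; this sketch takes the parameters as given.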
Publication:
arXiv e-prints
Pub Date:
July 2024
DOI:
10.48550/arXiv.2407.12844
arXiv:
arXiv:2407.12844
Bibcode:
2024arXiv240712844K
Keywords:
Computer Science - Computation and Language;
Computer Science - Machine Learning;
Statistics - Machine Learning
E-Print:
LLMs, benchmarking, IRT, information, compression