Commit fe9fef4

Adding tinyBenchmarks datasets (EleutherAI#1545)
* Add tinyBenchmarks
* Add acknowledgements
* Add ordering of outputs for data-parallel
* Run pre-commit
* Add few_shot specifications
* Add tinyBenchmarks post-processing
* add conditional import; fix task names

Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
1 parent 1980a13 commit fe9fef4

13 files changed: +580 -0 lines changed
@@ -0,0 +1,130 @@
# tinyBenchmarks

### Paper

Title: `tinyBenchmarks: evaluating LLMs with fewer examples`

Abstract: https://arxiv.org/abs/2402.14992

The versatility of large language models (LLMs) led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens of thousands of examples making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.

Homepage: -

All configs and utils mirror those of the original datasets!

### Groups and Tasks

#### Groups

* `tinyBenchmarks`

#### Tasks

* `tinyArc`, `tinyGSM8k`, `tinyHellaswag`, `tinyMMLU`, `tinyTruthfulQA`, `tinyWinogrande`

### Usage

*tinyBenchmarks* can evaluate different benchmarks with a fraction of their examples.
To obtain accurate results, this task applies post-processing using the *tinyBenchmarks* package.
You can install the package by running the following command in the terminal (for more information see [here](https://github.com/felipemaiapolo/tinyBenchmarks/blob/main/README.md?plain=1)):

```sh
pip install git+https://github.com/felipemaiapolo/tinyBenchmarks
```

The value returned by the task corresponds to the **IRT++** method from the [original paper](https://arxiv.org/abs/2402.14992).
Evaluate specific tasks individually (e.g. `--tasks tinyHellaswag`) or all [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) tasks at once by specifying `--tasks tinyBenchmarks`.

### Advanced usage
39+
40+
To obtain the estimated accuracies from all methods from the original paper, the *tinyBenchmarks*-package has to be applied manually.
41+
To do so, run the evaluation with the `--log_samples` and `--output_path` arguments. For example:
42+
43+
```bash
44+
lm_eval --model hf \
45+
--model_args pretrained="mistralai/Mistral-7B-Instruct-v0.2" \
46+
--tasks tinyHellaswag \
47+
--batch_size 4 \
48+
--output_path '<output_path>' \
49+
--log_samples
50+
```
51+
52+
Afterwards, run include the correct `file_path` and run the following script:
53+
54+
```python
import json
import tinyBenchmarks as tb
import numpy as np

# Choose benchmark (e.g. hellaswag)
benchmark = 'hellaswag'  # possible benchmarks:
                         # ['mmlu', 'truthfulqa', 'gsm8k',
                         #  'winogrande', 'arc', 'hellaswag']

# Get score vector from output-file (the metric [here `acc_norm`] depends on the benchmark)
file_path = '<output_path>/<output-file.jsonl>'
with open(file_path, 'r') as file:
    outputs = json.load(file)

# Ensuring correct order of outputs
outputs = sorted(outputs, key=lambda x: x['doc_id'])

y = np.array([float(item['acc_norm']) for item in outputs])

### Evaluation
tb.evaluate(y, benchmark)
```
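
If you want to inspect all estimates rather than a single value, a short, hedged continuation of the script above is sketched below. The keys `"pirt"` and `"gpirt"` are the ones read by the aggregation functions added in this commit; an `"irt"` entry is assumed by analogy with the paper's three methods.

```python
# Continuation of the script above (assumes `y` and `benchmark` are already defined).
predictions = tb.evaluate(y, benchmark)

# The returned dict is keyed by benchmark; "pirt" and "gpirt" are read by the
# harness aggregation functions, and an "irt" entry is assumed to be present too.
for method, estimate in predictions[benchmark].items():
    print(f"{method}: {estimate}")
```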

### Performance

We report in the following tables the average estimation error on the test set (using data from the paper); the standard deviation across LLMs is given in parentheses.

#### Open LLM Leaderboard
83+
84+
Estimating performance for each scenario separately
85+
|| IRT | p-IRT | gp-IRT |
86+
|--|--|--|--|
87+
| TruthfulQA | 0.013 (0.010) | 0.010 (0.009) | 0.011 (0.009) |
88+
| GSM8K | 0.022 (0.017) | 0.029 (0.022) | 0.020 (0.017) |
89+
| Winogrande | 0.022 (0.017) | 0.016 (0.014) | 0.015 (0.013) |
90+
| ARC | 0.022 (0.018) | 0.017 (0.014) | 0.017 (0.013) |
91+
| HellaSwag | 0.013 (0.016) | 0.015 (0.012) | 0.015 (0.012) |
92+
| MMLU | 0.024 (0.017) | 0.016 (0.015) | 0.016 (0.015) |
93+
Estimating performance for all scenarios at once:

| Benchmark | IRT | p-IRT | gp-IRT |
|--|--|--|--|
| TruthfulQA | 0.013 (0.010) | 0.016 (0.013) | 0.011 (0.009) |
| GSM8K | 0.022 (0.017) | 0.022 (0.017) | 0.020 (0.015) |
| Winogrande | 0.022 (0.017) | 0.011 (0.013) | 0.011 (0.011) |
| ARC | 0.022 (0.018) | 0.012 (0.010) | 0.010 (0.009) |
| HellaSwag | 0.013 (0.016) | 0.011 (0.020) | 0.011 (0.018) |
| MMLU | 0.024 (0.018) | 0.017 (0.017) | 0.015 (0.015) |

### Citation

```bibtex
@article{polo2024tinybenchmarks,
  title={tinyBenchmarks: evaluating LLMs with fewer examples},
  author={Maia Polo, Felipe and Weber, Lucas and Choshen, Leshem and Sun, Yuekai and Xu, Gongjun and Yurochkin, Mikhail},
  journal={arXiv preprint arXiv:2402.14992},
  year={2024}
}
```

Please also reference the respective original dataset that you are using!

### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?

@@ -0,0 +1,54 @@
from typing import List

import numpy as np


try:
    import tinyBenchmarks as tb
except ModuleNotFoundError:
    raise ModuleNotFoundError(
        "`tinyBenchmarks` is required for tinyBenchmarks task metric calculation, install via \
`pip install git+https://github.com/felipemaiapolo/tinyBenchmarks`"
    )


def agg_pirt(items: List[float], benchmark: str) -> float:
    items = np.array(items)
    predictions = tb.evaluate(items, benchmark)
    return predictions[benchmark]["pirt"]


def agg_gpirt_arc(items: List[float], benchmark: str = "arc") -> float:
    items = np.array(items)
    predictions = tb.evaluate(items, benchmark)
    return predictions[benchmark]["gpirt"]


def agg_gpirt_gsm8k(items: List[float], benchmark: str = "gsm8k") -> float:
    items = np.array(items)
    predictions = tb.evaluate(items, benchmark)
    return predictions[benchmark]["gpirt"]


def agg_gpirt_hellaswag(items: List[float], benchmark: str = "hellaswag") -> float:
    items = np.array(items)
    predictions = tb.evaluate(items, benchmark)
    return predictions[benchmark]["gpirt"]


def agg_gpirt_mmlu(items: List[float], benchmark: str = "mmlu") -> float:
    items = np.array(items)
    predictions = tb.evaluate(items, benchmark)
    return predictions[benchmark]["gpirt"]


def agg_gpirt_truthfulqa(items: List[float], benchmark: str = "truthfulqa") -> float:
    items = np.array(items)
    predictions = tb.evaluate(items, benchmark)
    return predictions[benchmark]["gpirt"]


def agg_gpirt_winogrande(items: List[float], benchmark: str = "winogrande") -> float:
    items = np.array(items)
    predictions = tb.evaluate(items, benchmark)
    return predictions[benchmark]["gpirt"]
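
The seven `agg_gpirt_*` helpers above share one pattern and differ only in their default `benchmark` argument; the per-benchmark wrappers exist because each task YAML below references a concrete function name via `!function`. A hypothetical consolidated form (illustration only, not part of this commit) makes the shared pattern explicit:

```python
# Hypothetical consolidation (illustration only, not part of this commit):
# every agg_gpirt_* function above maps a doc_id-ordered vector of per-example
# scores to the gp-IRT estimate of full-benchmark accuracy.
from typing import List

import numpy as np
import tinyBenchmarks as tb


def agg_gpirt(items: List[float], benchmark: str) -> float:
    y = np.array(items)                      # per-example scores, e.g. acc_norm
    predictions = tb.evaluate(y, benchmark)  # IRT-based performance estimates
    return predictions[benchmark]["gpirt"]   # "gp-IRT" method from the paper
```
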
@@ -0,0 +1,19 @@
task: tinyArc
dataset_path: tinyBenchmarks/tinyAI2_arc
dataset_name: ARC-Challenge
output_type: multiple_choice
training_split: train
validation_split: validation
test_split: test
num_fewshot: 25
doc_to_text: "Question: {{question}}\nAnswer:"
doc_to_target: "{{choices.label.index(answerKey)}}"
doc_to_choice: "{{choices.text}}"
should_decontaminate: true
doc_to_decontamination_query: "Question: {{question}}\nAnswer:"
metric_list:
  - metric: acc_norm
    aggregation: !function agg_functions.agg_gpirt_arc
    higher_is_better: true
metadata:
  version: 0.0

@@ -0,0 +1,16 @@
group: tinyBenchmarks
task:
  - task: tinyArc
    num_fewshot: 25
  - task: tinyGSM8k
    num_fewshot: 5
  - task: tinyMMLU
    num_fewshot: 0
  - task: tinyWinogrande
    num_fewshot: 5
  - task: tinyHellaswag
    num_fewshot: 10
  - task: tinyTruthfulQA
    num_fewshot: 0
metadata:
  version: 0.0

@@ -0,0 +1,44 @@
task: tinyGSM8k
dataset_path: tinyBenchmarks/tinyGSM8k
dataset_name: main
output_type: generate_until
training_split: train
fewshot_split: train
test_split: test
num_fewshot: 5
doc_to_text: "Question: {{question}}\nAnswer:"
doc_to_target: "{{answer}}" #" {{answer.split('### ')[-1].rstrip()}}"
metric_list:
  - metric: exact_match
    aggregation: !function agg_functions.agg_gpirt_gsm8k
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: false
    regexes_to_ignore:
      - ","
      - "\\$"
      - "(?s).*#### "
      - "\\.$"
generation_kwargs:
  until:
    - "Question:"
    - "</s>"
    - "<|im_end|>"
  do_sample: false
  temperature: 0.0
repeats: 1
filter_list:
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "#### (\\-?[0-9\\.\\,]+)"
      - function: "take_first"
  - name: "flexible-extract"
    filter:
      - function: "regex"
        group_select: -1
        regex_pattern: "(-?[$0-9.,]{2,})|(-?[0-9]+)"
      - function: "take_first"
metadata:
  version: 0.0

@@ -0,0 +1,18 @@
task: tinyHellaswag
dataset_path: tinyBenchmarks/tinyHellaswag
dataset_name: null
output_type: multiple_choice
training_split: train
validation_split: validation
num_fewshot: 10
test_split: null
process_docs: !function utils_hellaswag.process_docs
doc_to_text: "{{query}}"
doc_to_target: "{{label}}"
doc_to_choice: "choices"
metric_list:
  - metric: acc_norm
    aggregation: !function agg_functions.agg_gpirt_hellaswag
    higher_is_better: true
metadata:
  version: 0.0

@@ -0,0 +1,19 @@
task: tinyMMLU
dataset_path: tinyBenchmarks/tinyMMLU
dataset_name: all
test_split: test
fewshot_split: dev
num_fewshot: 0
fewshot_config:
  sampler: first_n
output_type: multiple_choice
doc_to_text: "{{input_formatted}}"
doc_to_choice: ["A", "B", "C", "D"]
doc_to_target: answer
metric_list:
  - metric: acc_norm
    aggregation: !function agg_functions.agg_gpirt_mmlu
    higher_is_better: true
metadata:
  version: 0.0

@@ -0,0 +1,34 @@
task: tinyTruthfulQA_mc1
dataset_path: tinyBenchmarks/tinyTruthfulQA
dataset_name: multiple_choice
output_type: multiple_choice
training_split: null
validation_split: validation
test_split: null
num_fewshot: 0
doc_to_text: "\
  {% set prompt_qa = '\
  Q: What is human life expectancy in the United States?\n\
  A: Human life expectancy in the United States is 78 years.\n\n\
  Q: Who was president of the United States in 1955?\n\
  A: Dwight D. Eisenhower was president of the United States in 1955.\n\n\
  Q: Which party did he belong to?\n\
  A: He belonged to the Republican Party.\n\n\
  Q: What is the square root of banana?\n\
  A: I have no comment.\n\n\
  Q: How does a telescope work?\n\
  A: Telescopes use lenses or mirrors to focus light and make objects appear closer.\n\n\
  Q: Where were the 1992 Olympics held?\n\
  A: The 1992 Olympics were held in Barcelona, Spain.\
  '%}\
  {{prompt_qa + '\n\nQ: ' + question + '\nA:'}}"
doc_to_target: 0
doc_to_choice: "{{mc1_targets.choices}}"
should_decontaminate: True
doc_to_decontamination_query: question
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  version: 0.0

@@ -0,0 +1,13 @@
include: tinyTruthfulQA_mc1.yaml
task: tinyTruthfulQA
doc_to_target: 0
doc_to_choice: "{{mc2_targets.choices}}"
process_results: !function utils_truthfulqa.process_results_mc2
should_decontaminate: True
doc_to_decontamination_query: question
metric_list:
  - metric: acc
    aggregation: !function agg_functions.agg_gpirt_truthfulqa
    higher_is_better: true
metadata:
  version: 0.0

@@ -0,0 +1,18 @@
task: tinyWinogrande
dataset_path: tinyBenchmarks/tinyWinogrande
dataset_name: winogrande_xl
output_type: multiple_choice
training_split: train
validation_split: validation
num_fewshot: 5
doc_to_text: !function utils_winogrande.doc_to_text
doc_to_target: !function utils_winogrande.doc_to_target
doc_to_choice: !function utils_winogrande.doc_to_choice
should_decontaminate: true
doc_to_decontamination_query: sentence
metric_list:
  - metric: acc_norm
    aggregation: !function agg_functions.agg_gpirt_winogrande
    higher_is_better: true
metadata:
  version: 0.0
