# tinyBenchmarks

### Paper

Title: `tinyBenchmarks: evaluating LLMs with fewer examples`

Abstract: https://arxiv.org/abs/2402.14992

The versatility of large language models (LLMs) led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens of thousands of examples making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.

Homepage: -
All configs and utils mirror the ones from their original datasets!

### Groups and Tasks

#### Groups

* `tinyBenchmarks`

#### Tasks

* `tinyArc`, `tinyGSM8k`, `tinyHellaswag`, `tinyMMLU`, `tinyTruthfulQA`, `tinyWinogrande`

### Usage

*tinyBenchmarks* can evaluate different benchmarks with only a fraction of their examples.
To obtain accurate results, this task applies post-processing using the *tinyBenchmarks* package.
You can install the package by running the following command in the terminal (for more information, see the [package README](https://github.com/felipemaiapolo/tinyBenchmarks/blob/main/README.md?plain=1)):

```sh
pip install git+https://github.com/felipemaiapolo/tinyBenchmarks
```
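
You can quickly check that the installation worked with a simple import check (this one-liner is only illustrative; it just confirms the module is importable):

```sh
python -c "import tinyBenchmarks"
```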

The value returned by the task corresponds to the '**IRT++**' method from the [original paper](https://arxiv.org/abs/2402.14992).
You can evaluate specific tasks individually (e.g. `--tasks tinyHellaswag`) or all [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) tasks at once by specifying `--tasks tinyBenchmarks`.
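
For example, a basic single-task run might look like the following (the model and batch size are just placeholders; any model supported by the harness should work):

```bash
lm_eval --model hf \
    --model_args pretrained="mistralai/Mistral-7B-Instruct-v0.2" \
    --tasks tinyMMLU \
    --batch_size 4
```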

### Advanced usage

To obtain the estimated accuracies from all methods in the original paper, the *tinyBenchmarks* package has to be applied manually.
To do so, run the evaluation with the `--log_samples` and `--output_path` arguments. For example:

```bash
lm_eval --model hf \
    --model_args pretrained="mistralai/Mistral-7B-Instruct-v0.2" \
    --tasks tinyHellaswag \
    --batch_size 4 \
    --output_path '<output_path>' \
    --log_samples
```

Afterwards, set the correct `file_path` and run the following script:

```python
import json
import tinyBenchmarks as tb
import numpy as np

# Choose benchmark (e.g. hellaswag)
benchmark = 'hellaswag'  # possible benchmarks:
                         # ['mmlu', 'truthfulqa', 'gsm8k',
                         #  'winogrande', 'arc', 'hellaswag']

# Get the score vector from the output file (the metric [here `acc_norm`] depends on the benchmark)
file_path = '<output_path>/<output-file.jsonl>'
with open(file_path, 'r') as file:
    outputs = json.load(file)

# Ensure the correct order of outputs
outputs = sorted(outputs, key=lambda x: x['doc_id'])

y = np.array([float(item['acc_norm']) for item in outputs])

### Evaluation
# tb.evaluate returns the estimated accuracies from the methods described in the paper
results = tb.evaluate(y, benchmark)
print(results)
```
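
The metric key (`acc_norm` above) differs between benchmarks, so a small lookup table can make the script reusable across tasks. The mapping below is an assumption based on the metrics these tasks typically report; check it against the keys actually present in your samples file before relying on it:

```python
# Assumed metric name per benchmark -- verify against the keys in your output file.
metric_by_benchmark = {
    'hellaswag': 'acc_norm',
    'arc': 'acc_norm',
    'mmlu': 'acc',
    'winogrande': 'acc',
    'truthfulqa': 'acc',
    'gsm8k': 'exact_match',
}
y = np.array([float(item[metric_by_benchmark[benchmark]]) for item in outputs])
```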

### Performance

The following tables report the average estimation error on the test set (using data from the paper), with the standard deviation across LLMs given in parentheses.

#### Open LLM Leaderboard

Estimating performance for each scenario separately:
| Benchmark | IRT | p-IRT | gp-IRT |
|--|--|--|--|
| TruthfulQA | 0.013 (0.010) | 0.010 (0.009) | 0.011 (0.009) |
| GSM8K | 0.022 (0.017) | 0.029 (0.022) | 0.020 (0.017) |
| Winogrande | 0.022 (0.017) | 0.016 (0.014) | 0.015 (0.013) |
| ARC | 0.022 (0.018) | 0.017 (0.014) | 0.017 (0.013) |
| HellaSwag | 0.013 (0.016) | 0.015 (0.012) | 0.015 (0.012) |
| MMLU | 0.024 (0.017) | 0.016 (0.015) | 0.016 (0.015) |

Estimating performance for all scenarios at once:
| Benchmark | IRT | p-IRT | gp-IRT |
|--|--|--|--|
| TruthfulQA | 0.013 (0.010) | 0.016 (0.013) | 0.011 (0.009) |
| GSM8K | 0.022 (0.017) | 0.022 (0.017) | 0.020 (0.015) |
| Winogrande | 0.022 (0.017) | 0.011 (0.013) | 0.011 (0.011) |
| ARC | 0.022 (0.018) | 0.012 (0.010) | 0.010 (0.009) |
| HellaSwag | 0.013 (0.016) | 0.011 (0.020) | 0.011 (0.018) |
| MMLU | 0.024 (0.018) | 0.017 (0.017) | 0.015 (0.015) |

### Citation

```bibtex
@article{polo2024tinybenchmarks,
  title={tinyBenchmarks: evaluating LLMs with fewer examples},
  author={Maia Polo, Felipe and Weber, Lucas and Choshen, Leshem and Sun, Yuekai and Xu, Gongjun and Yurochkin, Mikhail},
  journal={arXiv preprint arXiv:2402.14992},
  year={2024}
}
```

Please also reference the respective original datasets that you are using!

### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?