Adding Evaluation Results
#1
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -55,3 +55,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.1+cu117
 - Datasets 2.14.3
 - Tokenizers 0.13.3
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_synapsoft__Llama-2-7b-hf-flan2022-1.2M)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 41.68 |
+| ARC (25-shot)       | 23.29 |
+| HellaSwag (10-shot) | 78.46 |
+| MMLU (5-shot)       | 42.33 |
+| TruthfulQA (0-shot) | 37.97 |
+| Winogrande (5-shot) | 75.53 |
+| GSM8K (5-shot)      | 4.47  |
+| DROP (3-shot)       | 29.66 |
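For readers who want to inspect the linked per-benchmark details programmatically, here is a minimal sketch using the Hugging Face `datasets` library. It is not part of this PR: it assumes the details repo follows the leaderboard's usual layout of one dataset config per benchmark run, and it discovers the config names at runtime rather than hard-coding any.

```python
# Minimal sketch (assumption, not part of this PR): load the detailed
# leaderboard results referenced above with the `datasets` library.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_synapsoft__Llama-2-7b-hf-flan2022-1.2M"

# Details repos typically expose one config per evaluated task/run;
# list them first instead of guessing names.
configs = get_dataset_config_names(repo)
print(configs)

# Load one config with default splits for a first look; split naming
# varies between details repos, so defaults are the safe starting point.
details = load_dataset(repo, configs[0])
print(details)
```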