
https://wandb.ai/open-assistant/supervised-finetuning/runs/t88j2m4k

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 49.9  |
| ARC (25-shot)        | 56.31 |
| HellaSwag (10-shot)  | 79.32 |
| MMLU (5-shot)        | 47.03 |
| TruthfulQA (0-shot)  | 48.42 |
| Winogrande (5-shot)  | 76.95 |
| GSM8K (5-shot)       | 16.07 |
| DROP (3-shot)        | 25.22 |