# gpt-j-tiny-random

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 24.79 |
| ARC (25-shot)         | 26.37 |
| HellaSwag (10-shot)   | 25.76 |
| MMLU (5-shot)         | 24.46 |
| TruthfulQA (0-shot)   | 47.44 |
| Winogrande (5-shot)   | 49.49 |
| GSM8K (5-shot)        | 0.0   |
| DROP (3-shot)         | 0.01  |
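
The reported Avg. is consistent with a simple unweighted mean of the seven benchmark scores. A minimal sketch verifying the arithmetic (the `scores` dict below just restates the table; it is not a leaderboard API):

```python
# Check that "Avg." matches the unweighted mean of the seven benchmarks.
scores = {
    "ARC (25-shot)": 26.37,
    "HellaSwag (10-shot)": 25.76,
    "MMLU (5-shot)": 24.46,
    "TruthfulQA (0-shot)": 47.44,
    "Winogrande (5-shot)": 49.49,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 0.01,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 24.79 -- matches the reported Avg.
```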