
# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 25.24 |
| ARC (25-shot)        | 22.7  |
| HellaSwag (10-shot)  | 27.26 |
| MMLU (5-shot)        | 25.05 |
| TruthfulQA (0-shot)  | 51.23 |
| Winogrande (5-shot)  | 48.78 |
| GSM8K (5-shot)       | 0.0   |
| DROP (3-shot)        | 1.64  |
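
These scores come from the Open LLM Leaderboard, which evaluates models with EleutherAI's lm-evaluation-harness using the few-shot settings shown above. Below is a minimal sketch of re-running a single task locally. The repo id `microsoft/CodeGPT-small-py` is inferred from the model name, and the API shown follows harness v0.4-style conventions, so task names and arguments may differ in other versions; treat this as illustrative, not the leaderboard's exact pipeline.

```python
# Illustrative sketch: re-running one leaderboard-style task locally with
# EleutherAI's lm-evaluation-harness (pip install lm-eval).
# Assumptions: repo id "microsoft/CodeGPT-small-py" (inferred from the model
# name) and a v0.4-style harness API; adjust for your installed version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=microsoft/CodeGPT-small-py",
    tasks=["arc_challenge"],  # ARC is scored 25-shot on the leaderboard
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```

Running a full 25-shot task on CPU can be slow even for a small model; passing `device="cuda:0"` to `simple_evaluate` moves evaluation to a GPU if one is available.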