# ShearedPlats-7b
An experimental finetune of Sheared LLaMA 2.7b with Alpaca-QLoRA (version 2).

## Datasets
Trained on Alpaca-style datasets.
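As context, the snippet below is a minimal, generic QLoRA setup sketch (4-bit NF4 base weights via bitsandbytes plus LoRA adapters via peft), not the exact Alpaca-QLoRA configuration used for this model; the base-model id and the LoRA hyperparameters are illustrative assumptions.

```python
# Generic QLoRA setup sketch; hyperparameters and model id are illustrative assumptions,
# not the exact Alpaca-QLoRA configuration used for this finetune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "princeton-nlp/Sheared-LLaMA-2.7B"  # pruned base model

# 4-bit NF4 quantization of the frozen base weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA hyperparameters; the values actually used are not stated on this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Training itself (e.g. transformers.Trainer on Alpaca-format data) is omitted here.
```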
## Prompt Template

Uses the Alpaca-style prompt template, as sketched below.
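The following is a minimal inference sketch assuming the standard Alpaca instruction template; the repo id, generation settings, and example instruction are placeholders rather than values taken from this card.

```python
# Minimal Alpaca-style prompting sketch for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-namespace/ShearedPlats-7b"  # placeholder: substitute this model's actual repo id

# Standard Alpaca prompt (no-input variant)
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Explain what model pruning is in one sentence."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# The model's answer follows the "### Response:" marker in the decoded text.
print(tokenizer.decode(output[0], skip_special_tokens=True))
```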
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 36.72 |
| ARC (25-shot) | 42.41 |
| HellaSwag (10-shot) | 72.58 |
| MMLU (5-shot) | 27.52 |
| TruthfulQA (0-shot) | 39.76 |
| Winogrande (5-shot) | 65.9 |
| GSM8K (5-shot) | 1.52 |
| DROP (3-shot) | 7.34 |