The LLaMA-based Pygmalion-7b model:

https://huggingface.co/PygmalionAI/pygmalion-7b

Merged with Tloen's Alpaca LoRA:

https://huggingface.co/tloen/alpaca-lora-7b

This was done to test whether LoRAs trained for other LLaMA fine-tunes work with Pygmalion, and to make the merged model available on inference backends that do not yet support LoRAs.
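
The merge itself can be reproduced with Hugging Face's `peft` library. Below is a minimal sketch, assuming a standard `merge_and_unload()` workflow; the exact script used for this checkpoint is not documented, and the output directory name is a placeholder.

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
from peft import PeftModel

# Load the Pygmalion-7b base weights in half precision.
base = LlamaForCausalLM.from_pretrained(
    "PygmalionAI/pygmalion-7b", torch_dtype=torch.float16
)

# Attach the Alpaca LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")

# Fold the LoRA deltas into the base weights, leaving a plain
# Transformers checkpoint that needs no PEFT support at inference time.
model = model.merge_and_unload()

# "pygmalion-alpaca-merged" is a placeholder output directory.
model.save_pretrained("pygmalion-alpaca-merged")
tokenizer = AutoTokenizer.from_pretrained("PygmalionAI/pygmalion-7b")
tokenizer.save_pretrained("pygmalion-alpaca-merged")
```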

Treat this as a normal HF Transformers model.
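
For example, it can be loaded and prompted with vanilla Transformers, no PEFT required. A minimal sketch; the repo id is inferred from the evaluation results below, and the Alpaca-style prompt is only illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this card's evaluation results; adjust as needed.
model_id = "TehVenom/Pygmalion_AlpacaLora-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style instruction prompt (illustrative; other formats the
# base model understands should work as well).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```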

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__Pygmalion_AlpacaLora-7b).

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 44.27 |
| ARC (25-shot)        | 53.24 |
| HellaSwag (10-shot)  | 76.92 |
| MMLU (5-shot)        | 35.92 |
| TruthfulQA (0-shot)  | 39.44 |
| Winogrande (5-shot)  | 72.22 |
| GSM8K (5-shot)       | 1.21  |
| DROP (3-shot)        | 30.91 |