If we try to replicate OpenLLM Leaderboard results on available Serbian datasets:
|            | ARC       | Hellaswag | Winogrande | TruthfulQA | Avg.      |
|------------|-----------|-----------|------------|------------|-----------|
| Tito-7B    | 47.27     | -         | 69.93      | **57.48**  | 58.23     |
| [Perucac-7B](https://huggingface.co/Stopwolf/Perucac-7B-slerp) | **49.74** | - | **71.98** | 56.03 | **59.25** |
| YugoGPT    | 44.03     | -         | 70.64      | 48.06      | 54.24     |
| Llama3-8B  | 42.24     | -         | 61.25      | 51.08      | 51.52     |
| SambaLingo | 37.88     | -         | 61.48      | 47.23      | 48.86     |

Note that YugoGPT, Llama3, and SambaLingo are all base models, unlike Tito and Perucac.
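The Avg. column appears to be the simple mean of the three reported scores (Hellaswag is missing and excluded); a quick sanity check over the table above reproduces every value:

```python
# Sanity check: Avg. looks like the mean of the three available scores
# (ARC, Winogrande, TruthfulQA); Hellaswag is absent from all rows.
scores = {
    "Tito-7B":    (47.27, 69.93, 57.48),
    "Perucac-7B": (49.74, 71.98, 56.03),
    "YugoGPT":    (44.03, 70.64, 48.06),
    "Llama3-8B":  (42.24, 61.25, 51.08),
    "SambaLingo": (37.88, 61.48, 47.23),
}
for model, vals in scores.items():
    print(f"{model}: {sum(vals) / len(vals):.2f}")
```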
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Stopwolf__Tito-7B-slerp).