|
--- |
|
base_model: |
|
- cognitivecomputations/Dolphin3.0-Llama3.2-3B |
|
- SaisExperiments/Evil-Alpaca-3B-L3.2 |
|
- Nexesenex/Llama_3.2_3b_Kermes_0.20 |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
license: llama3.2 |
|
model-index: |
|
- name: Llama_3.2_3b_Kermes_v2.1 |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: IFEval (0-Shot) |
|
type: wis-k/instruction-following-eval |
|
split: train |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: inst_level_strict_acc and prompt_level_strict_acc |
|
value: 55.84 |
|
name: averaged accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: BBH (3-Shot) |
|
type: SaylorTwift/bbh |
|
split: test |
|
args: |
|
num_few_shot: 3 |
|
metrics: |
|
- type: acc_norm |
|
value: 22.17 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MATH Lvl 5 (4-Shot) |
|
type: lighteval/MATH-Hard |
|
split: test |
|
args: |
|
num_few_shot: 4 |
|
metrics: |
|
- type: exact_match |
|
value: 5.21 |
|
name: exact match |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GPQA (0-shot) |
|
type: Idavidrein/gpqa |
|
split: train |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 3.91 |
|
name: acc_norm |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MuSR (0-shot) |
|
type: TAUR-Lab/MuSR |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 7.51 |
|
name: acc_norm |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU-PRO (5-shot) |
|
type: TIGER-Lab/MMLU-Pro |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 18.8 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1 |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
# about |
|
|
|
The Kermes series is my second attempt at making merges, after the Kostume series. |
|
|
|
With the Kostume series, started on 11/02/2025, I tried to make a triple stock merge of 3 intermediary stock merges of a dozen models or so.

The aim was to see if I could pile up their abilities.

Not bad, but nothing special about it; it's a bit hard for me to judge at 3b.
|
|
|
For the Kermes series, started the day after, I defined a simpler approach:
|
|
|
- Perplexity is the main constraint. Usual L3.2 3b finetunes sit around 10.5-11 ppl512wikieng (perplexity at a context of 512 on an English Wikipedia sample); Hermes is around 9.5.

- I also measure in French and Serbian to observe the variance.
|
|
|
- ARC Challenge and ARC Easy are the second constraint, to judge basic logic.

- Usual L3.2 3b finetunes hit 40 and 60-65 respectively; Hermes3 hits 47+ and 70+.
|
|
|
- Lack of censorship. I always keep in mind to pick models compatible with that as much as possible (AMAP).

- Be it through the picked models' abliteration or the datasets they use.
|
|
|
- And of course, hands-on testing, both in Kobold/Croco.CPP (spamming very offensive requests) and in SillyTavern (a 10k-token prompt with a big lorebook).
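For context, figures like these can be obtained with llama.cpp's `llama-perplexity` tool run at a context of 512 on a wikitext sample, and with lm-evaluation-harness's `arc_challenge` and `arc_easy` tasks; the exact setup used here may differ.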
|
|
|
The Kermes 2 series is basically one stock merge on top of another.

- The goal was to preserve as much as possible the qualities of the models used, so I stayed with 1 base + 2 models for the first merge, and 1+2 for the second as well.
|
|
|
For V2.1:

- First, DarkHermes as the base, LlamaLoi as the "stabilizer", and Hermes Abliterated (see the config sketch after this list).

- That triplet kept the strong benchmarks of DarkHermes and even improved them a bit.

- Second, that Kermes 0.20 served as the base, with Evil Alpaca as a wild card (very good ARC scores and a nasty dataset) and Dolphin 3.0 as a quality addition.
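As a minimal sketch, the first-stage (Kermes 0.20) merge presumably followed the same Model Stock recipe as the final config shown further below; the three repo IDs here are illustrative placeholders, since the exact DarkHermes, LlamaLoi, and Hermes Abliterated repositories are not listed on this card:

```yaml
# Hypothetical first-stage config; the model IDs are placeholders, not the exact repos used.
merge_method: model_stock
models:
  - model: LlamaLoi-placeholder            # the "stabilizer"
    parameters:
      weight: 1.0
  - model: Hermes-Abliterated-placeholder  # uncensored contributor
    parameters:
      weight: 1.0
base_model: DarkHermes-placeholder         # the base whose benchmarks were preserved
dtype: float16
normalize: true
```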
|
|
|
And bingo: perplexity goes down further, ARC scores remain stable, it's still quite unhinged, and quite coherent, even at 10k+ context.
|
|
|
I will probably replicate that recipe in the future, first to try to improve Kermes 3b.

Then I'll move on to 8b for the next arc of this adventure.
|
|
|
Kudos go to the model authors, to the Arcee / MergeKit folks, and to HF for hosting the MergeKit app.

Also a big-up to SteelSkull: watching him cook Nevoria convinced me to try making some merges myself.
|
|
|
--- |
|
# quantizations |
|
|
|
GGUF static quantizations (thanks Mradermacher!):
|
|
|
https://huggingface.co/mradermacher/Llama_3.2_3b_Kermes_v2.1-GGUF |
|
|
|
GGUF iMatrix quantizations (thanks Mradermacher!):
|
|
|
https://huggingface.co/mradermacher/Llama_3.2_3b_Kermes_v2.1-i1-GGUF |
|
|
|
GGUF custom iMatrix quantizations: |
|
|
|
https://huggingface.co/Nexesenex/Llama_3.2_3b_Kermes_v2.1-iMat-CQ-GGUF |
|
|
|
--- |
|
# merge |
|
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
|
|
## Merge Details |
|
### Merge Method |
|
|
|
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Nexesenex/Llama_3.2_3b_Kermes_0.20](https://huggingface.co/Nexesenex/Llama_3.2_3b_Kermes_0.20) as the base. Model Stock averages the fine-tuned models' weights and interpolates the result toward the base model, using weight-space geometry to choose the interpolation ratio.
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [cognitivecomputations/Dolphin3.0-Llama3.2-3B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.2-3B) |
|
* [SaisExperiments/Evil-Alpaca-3B-L3.2](https://huggingface.co/SaisExperiments/Evil-Alpaca-3B-L3.2) |
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml |
|
merge_method: model_stock |
|
models: |
|
- model: SaisExperiments/Evil-Alpaca-3B-L3.2 |
|
parameters: |
|
weight: 1.0 |
|
- model: cognitivecomputations/Dolphin3.0-Llama3.2-3B |
|
parameters: |
|
weight: 1.0 |
|
base_model: Nexesenex/Llama_3.2_3b_Kermes_0.20 |
|
dtype: float16 |
|
normalize: true |
|
``` |
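To reproduce a merge like this one, the configuration can be saved to a file and passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory` (adding `--cuda` runs the merge on GPU).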
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Nexesenex__Llama_3.2_3b_Kermes_v2.1-details)! |
|
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)! |
|
|
|
| Metric |Value (%)| |
|
|-------------------|--------:| |
|
|**Average** | 18.91| |
|
|IFEval (0-Shot) | 55.84| |
|
|BBH (3-Shot) | 22.17| |
|
|MATH Lvl 5 (4-Shot)| 5.21| |
|
|GPQA (0-shot) | 3.91| |
|
|MuSR (0-shot) | 7.51| |
|
|MMLU-PRO (5-shot) | 18.80| |
|
|
|
|