---
base_model:
- cognitivecomputations/Dolphin3.0-Llama3.2-3B
- SaisExperiments/Evil-Alpaca-3B-L3.2
- Nexesenex/Llama_3.2_3b_Kermes_0.20
library_name: transformers
tags:
- mergekit
- merge
license: llama3.2
model-index:
- name: Llama_3.2_3b_Kermes_v2.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 55.84
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 22.17
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.21
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.91
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.51
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 18.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1
      name: Open LLM Leaderboard
---

# about

The Kermes series is my second attempt at making merges, after the Kostume series.

With the Kostume series, started on 11/02/2025, I tried a triple stock merge of 3 intermediary stock merges covering a dozen models or so,
this to see if I could pile up their abilities.
The result was not bad, but nothing special either; it's a bit hard for me to judge at 3b.

For the Kermes series, started the day after, I defined a simpler approach:

- Perplexity is the main constraint. Usual L3.2 3b finetunes sit around 10.5-11 ppl512wikieng; Hermes is around 9.5. A measurement sketch follows this list.
- I also measure in French and Serbian to observe the variance across languages.
  
- ARC Challenge and ARC Easy are the second constraint, to judge basic logic.
- Usual L3.2 3b finetunes hit 40 and 60-65 respectively; Hermes 3 hits 47+ and 70+.

- Lack of censorship: I always keep in mind to pick models compatible with that goal, as much as possible.
- That can come through the picked models' abliteration or through the datasets they were tuned on.

- And of course, actual testing, both in Kobold/Croco.CPP (spamming very offensive requests) and in SillyTavern (a 10k prompt with a big lorebook).
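
For reference, here is a minimal sketch of how such a 512-token perplexity can be measured with `transformers` (an assumption on my part: non-overlapping 512-token windows over WikiText-2; the exact tool and corpus behind the numbers above may differ):

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/Llama_3.2_3b_Kermes_v2.1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
).eval()

# Concatenate the English WikiText-2 test split into one long token stream.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids[0]

nll, n_tokens = 0.0, 0
with torch.no_grad():
    for i in range(0, len(ids) - 512, 512):  # non-overlapping 512-token windows
        window = ids[i : i + 512].unsqueeze(0).to(model.device)
        loss = model(window, labels=window).loss  # mean NLL over the window
        nll += loss.item() * (window.shape[1] - 1)
        n_tokens += window.shape[1] - 1

print(f"ppl512 ~ {math.exp(nll / n_tokens):.2f}")
```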

The Kermes 2 series is basically one stock merge on top of another.
- The goal was to maintain as much as possible of the qualities of the models used, so I stayed at 1+2 models for the first merge, and 1+2 for the second as well.

For V2.1:
- First, DarkHermes as the base, LlamaLoi as the "stabilizer", and Hermes Abliterated; a hypothetical sketch of this first-stage config follows this list.
- That triplet kept the strong benchmarks of DarkHermes and even improved them a bit.
- Second, that Kermes 0.2 served as the base, with Evil Alpaca as a wild card (very good ARC scores and a nasty dataset) and Dolphin 3.0 as a quality addition.
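
To illustrate, here is a hypothetical sketch of that first-stage configuration, in the same format as the actual config further below; the model identifiers are placeholders for the real repos, which I only name by their shorthand above:

```yaml
# Hypothetical first-stage recipe (placeholders, not the real repo IDs):
merge_method: model_stock
models:
  - model: LlamaLoi            # placeholder: the "stabilizer"
    parameters:
      weight: 1.0
  - model: Hermes-Abliterated  # placeholder: the abliterated Hermes finetune
    parameters:
      weight: 1.0
base_model: DarkHermes         # placeholder: the first-stage base
dtype: float16
normalize: true
```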

And bingo: perplexity still goes down, the ARC scores remain stable, and it's still quite unhinged, yet quite coherent, even at 10k+ context.

I will probably replicate that recipe a bit in the future, first to try to improve Kermes 3b,
and then move on to 8b for the next arc of this adventure.

Kudos go to the model authors, to the Arcee / MergeKit folks, and to HF for hosting the MergeKit App.
Also a big-up to SteelSkull: watching him cook Nevoria is what convinced me to try making some merges myself.

---
# quantizations

GGUF static quantizations (Thanks Mradermacher!):

https://huggingface.co/mradermacher/Llama_3.2_3b_Kermes_v2.1-GGUF

GGUF iMatrix quantizations (Thanks Mradermacher!):

https://huggingface.co/mradermacher/Llama_3.2_3b_Kermes_v2.1-i1-GGUF

GGUF custom iMatrix quantizations: 

https://huggingface.co/Nexesenex/Llama_3.2_3b_Kermes_v2.1-iMat-CQ-GGUF
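
If useful, a quant can also be fetched programmatically with `huggingface_hub`; the filename below is my assumption based on the usual naming scheme, so check the repo's file list for the exact quant you want:

```python
from huggingface_hub import hf_hub_download

# Assumed filename; verify it against the repository's file listing.
path = hf_hub_download(
    repo_id="mradermacher/Llama_3.2_3b_Kermes_v2.1-GGUF",
    filename="Llama_3.2_3b_Kermes_v2.1.Q4_K_M.gguf",
)
print(path)
```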

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Nexesenex/Llama_3.2_3b_Kermes_0.20](https://huggingface.co/Nexesenex/Llama_3.2_3b_Kermes_0.20) as the base.
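
As a rough picture of what Model Stock does (a minimal per-tensor sketch of my reading of the paper, not mergekit's exact implementation): it averages the fine-tuned weights, then interpolates that average back toward the base with a ratio derived from the angle between the task vectors.

```python
import numpy as np

def model_stock(base: np.ndarray, finetunes: list[np.ndarray]) -> np.ndarray:
    """Per-tensor sketch of Model Stock (arXiv:2403.19522), assuming >= 2 fine-tunes."""
    deltas = [w - base for w in finetunes]  # task vectors relative to the base
    n = len(deltas)
    # Mean pairwise cosine similarity between the task vectors.
    cos = np.mean([
        np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b))
        for i, a in enumerate(deltas) for b in deltas[i + 1:]
    ])
    t = n * cos / (1 + (n - 1) * cos)  # interpolation ratio toward the average
    return base + t * np.mean(deltas, axis=0)
```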

### Models Merged

The following models were included in the merge:
* [cognitivecomputations/Dolphin3.0-Llama3.2-3B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.2-3B)
* [SaisExperiments/Evil-Alpaca-3B-L3.2](https://huggingface.co/SaisExperiments/Evil-Alpaca-3B-L3.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: SaisExperiments/Evil-Alpaca-3B-L3.2
    parameters:
      weight: 1.0
  - model: cognitivecomputations/Dolphin3.0-Llama3.2-3B
    parameters:
      weight: 1.0
base_model: Nexesenex/Llama_3.2_3b_Kermes_0.20
dtype: float16
normalize: true
```
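
For reproduction, such a config can be run with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model` after a `pip install mergekit`.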
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Nexesenex__Llama_3.2_3b_Kermes_v2.1-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Nexesenex%2FLlama_3.2_3b_Kermes_v2.1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

|      Metric       |Value (%)|
|-------------------|--------:|
|**Average**        |    18.91|
|IFEval (0-Shot)    |    55.84|
|BBH (3-Shot)       |    22.17|
|MATH Lvl 5 (4-Shot)|     5.21|
|GPQA (0-shot)      |     3.91|
|MuSR (0-shot)      |     7.51|
|MMLU-PRO (5-shot)  |    18.80|