---
base_model:
- Novaciano/LAMED
- Novaciano/VAV
- Novaciano/TAV
- Novaciano/YOD
- Novaciano/NUN-FINAL
- Novaciano/BAPHOMET
library_name: transformers
tags:
- mergekit
- merge
- abliterated
- uncensored
- llama
- llama3.2
- not-for-all-audiences
language:
- en
- es
model-index:
- name: Sigil-Of-Satan-3.2-1B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 54.94
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FSigil-Of-Satan-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 9.4
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FSigil-Of-Satan-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.44
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FSigil-Of-Satan-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.45
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FSigil-Of-Satan-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.42
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FSigil-Of-Satan-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 9.5
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FSigil-Of-Satan-3.2-1B
      name: Open LLM Leaderboard
---
## merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

# Merge Details

<center> <img src="https://i.ibb.co/jkpYCJLs/pngimg-com-pentagram-PNG23.png" alt="pngimg-com-pentagram-PNG23" border="0">  </center>

---
# 🇬🇧 English

**Remixed version of the HarmfulProject-3.2-1B model.**

Note that there is nothing unusual about it beyond being a mix of mixes of uncensored and abliterated models.

The difference from [HarmfulProject-3.2-1B](https://huggingface.co/Novaciano/UNCENSORED-HarmfulProject-3.2-1B) is that, instead of all the source models being merged into one model at once, the same models were first combined into smaller intermediate merges, which this model then merges together.

## Author's Note:

I am not responsible for the model's content, since I only made the mix; I did not inject any dataset into it... yet.

### Other models created with this merge

- [Cultist-3.2-1B](https://huggingface.co/Novaciano/Cultist-3.2-1B) **- Sigil-of-Satan was used as a base; may contain LEWD data.**
- [LEWD-Mental-Cultist-3.2-1B](https://huggingface.co/Novaciano/LEWD-Mental-Cultist-3.2-1B) **- LEWD-Mental-Occult was used as a base; may be more explicit.**
- [ASTAROTH-3.2-1B](https://huggingface.co/Novaciano/ASTAROTH-3.2-1B) **- ASTAROTH is the definitive merge of all previous merges.**

---
# 🇪🇦 Español

**Remixed version of the HarmfulProject-3.2-1B model.**

Fair warning: there is nothing unusual about it beyond being a mix of mixes of abliterated and uncensored models.

The difference from [HarmfulProject-3.2-1B](https://huggingface.co/Novaciano/UNCENSORED-HarmfulProject-3.2-1B) is that, instead of all the models being merged into one model at once, the same models were combined into smaller intermediate merges.

**NOTICE:** I am not responsible for the model's content, since I only made the mix; I did not inject any dataset into it... yet.

### Other models created with this merge

- [Cultist-3.2-1B](https://huggingface.co/Novaciano/Cultist-3.2-1B) **- Sigil-of-Satan was used as a base; may contain LEWD data.**
- [LEWD-Mental-Cultist-3.2-1B](https://huggingface.co/Novaciano/LEWD-Mental-Cultist-3.2-1B) **- LEWD-Mental-Occult was used as a base; may be more explicit.**
- [ASTAROTH-3.2-1B](https://huggingface.co/Novaciano/ASTAROTH-3.2-1B) **- ASTAROTH is the definitive merge of all my previous merges.**

---
## Quants / Quantizations

- **Static quants:** [mradermacher/UNCENSORED-Sigil-Of-Satan-3.2-1B-GGUF](https://huggingface.co/mradermacher/UNCENSORED-Sigil-Of-Satan-3.2-1B-GGUF)
- **Weighted/iMatrix:** [In progress...]()

---
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Novaciano/BAPHOMET](https://huggingface.co/Novaciano/BAPHOMET) as a base.
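
Model Stock, per the linked paper, estimates for each layer how far to move from the base model's weights toward the average of the fine-tuned models' weights, based on the angle between their weight offsets. The sketch below is only an illustration of that idea in plain Python; the function name and flat-list weights are simplifications invented here, and mergekit's real implementation operates on full tensors and handles many details this omits.

```python
import math

def model_stock_layer(base, tuned, eps=1e-12):
    """Per-layer Model Stock merge, heavily simplified for illustration.

    base:  flat list of pretrained (base model) weights for one layer
    tuned: list of flat weight lists, one per fine-tuned model (needs >= 2)
    """
    n = len(tuned)

    # Task vectors: each fine-tuned model's offset from the base weights.
    deltas = [[w - b for w, b in zip(tw, base)] for tw in tuned]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
        return dot / max(norm, eps)

    # Average pairwise cosine of the angle between the task vectors.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    cos_theta = sum(cosine(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)

    # Interpolation ratio from the paper: t = n*cos / (1 + (n-1)*cos).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    # Move from the base toward the average of the fine-tuned weights by t.
    avg = [sum(ws) / n for ws in zip(*tuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

With identical fine-tunes the offsets are perfectly aligned (cos θ = 1), so t reaches 1 and the result is just their average; orthogonal offsets (cos θ = 0) give t = 0, and the merge stays at the base weights.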

### Models Merged

The following models were included in the merge:
* [Novaciano/TAV](https://huggingface.co/Novaciano/TAV)
* [Novaciano/YOD](https://huggingface.co/Novaciano/YOD)
* [Novaciano/VAV](https://huggingface.co/Novaciano/VAV)
* [Novaciano/LAMED](https://huggingface.co/Novaciano/LAMED)
* [Novaciano/NUN-FINAL](https://huggingface.co/Novaciano/NUN-FINAL)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
- model: Novaciano/LAMED
- model: Novaciano/VAV
- model: Novaciano/YOD
- model: Novaciano/TAV
- model: Novaciano/NUN-FINAL

merge_method: model_stock
base_model: Novaciano/BAPHOMET
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Novaciano__Sigil-Of-Satan-3.2-1B-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Novaciano%2FSigil-Of-Satan-3.2-1B&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

|      Metric       |Value (%)|
|-------------------|--------:|
|**Average**        |    13.69|
|IFEval (0-Shot)    |    54.94|
|BBH (3-Shot)       |     9.40|
|MATH Lvl 5 (4-Shot)|     5.44|
|GPQA (0-shot)      |     1.45|
|MuSR (0-shot)      |     1.42|
|MMLU-PRO (5-shot)  |     9.50|
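
The reported average is simply the unweighted mean of the six benchmark scores, which can be checked directly (scores copied from the table above):

```python
# Scores (percent) from the leaderboard table above.
scores = {
    "IFEval (0-Shot)": 54.94,
    "BBH (3-Shot)": 9.40,
    "MATH Lvl 5 (4-Shot)": 5.44,
    "GPQA (0-shot)": 1.45,
    "MuSR (0-shot)": 1.42,
    "MMLU-PRO (5-shot)": 9.50,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # prints 13.69
```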