Novaciano committed (verified) · Commit bd401d6 · Parent: c352ba8

Update README.md

Files changed (1): README.md (+77 −31)

README.md CHANGED
@@ -24,65 +24,106 @@ base_model:
 - nicoboss/Llama-3.2-1B-Instruct-Uncensored
 - mylesgoose/Llama-3.2-1B-Instruct-abliterated3
 - Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
 library_name: transformers
 tags:
 - mergekit
 - merge
-
 ---
-# merge
-
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

-## Merge Details
-### Merge Method

-This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [carsenk/llama3.2_1b_2025_uncensored_v2](https://huggingface.co/carsenk/llama3.2_1b_2025_uncensored_v2) as a base.

-### Models Merged

 The following models were included in the merge:
-* [brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated](https://huggingface.co/brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated)
-* [xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora](https://huggingface.co/xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora)
-* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25)
 * [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25)
 * [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
-* [rbc33/Llama-3.2-1B-Instruct-Abliterated](https://huggingface.co/rbc33/Llama-3.2-1B-Instruct-Abliterated)
-* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO)
 * [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
-* [ShuoGZ/llama-3.2-1B-Instruct-abliterated](https://huggingface.co/ShuoGZ/llama-3.2-1B-Instruct-abliterated)
-* [nztinversive/llama3.2-1b-Uncensored](https://huggingface.co/nztinversive/llama3.2-1b-Uncensored)
 * [Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25)
-* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25)
-* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv)
-* [Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL](https://huggingface.co/Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL)
 * [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
 * [huihui-ai/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated)
-* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1)
 * [Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
 * [KidIkaros/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/KidIkaros/Llama-3.2-1B-Instruct-abliterated)
-* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv)
-* [nicoboss/Llama-3.2-1B-Instruct-Uncensored](https://huggingface.co/nicoboss/Llama-3.2-1B-Instruct-Uncensored)
 * [mylesgoose/Llama-3.2-1B-Instruct-abliterated3](https://huggingface.co/mylesgoose/Llama-3.2-1B-Instruct-abliterated3)
 * [Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated](https://huggingface.co/Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated)

-### Configuration

-The following YAML configuration was used to produce this model:

 ```yaml
 models:
 - model: xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
 - model: Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
 - model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
 - model: KidIkaros/Llama-3.2-1B-Instruct-abliterated
-- model: nztinversive/llama3.2-1b-Uncensored
-- model: brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated
-- model: rbc33/Llama-3.2-1B-Instruct-Abliterated
-- model: mylesgoose/Llama-3.2-1B-Instruct-abliterated3
-- model: ShuoGZ/llama-3.2-1B-Instruct-abliterated
-- model: nicoboss/Llama-3.2-1B-Instruct-Uncensored
-- model: Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL
 - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25
 - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25
 - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
@@ -95,9 +136,14 @@ models:
 - model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25
 - model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv
 - model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv
 merge_method: model_stock
 base_model: carsenk/llama3.2_1b_2025_uncensored_v2
 dtype: bfloat16
 parameters:
 t: [0, 0.5, 1, 0.5, 0]
-```
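The [Model Stock](https://arxiv.org/abs/2403.19522) method referenced in the removed "Merge Method" line averages the fine-tuned checkpoints and interpolates the result toward the base model. The following toy sketch illustrates only that averaging idea on plain Python lists; it assumes a fixed interpolation factor `t`, whereas the real method derives the factor from the geometry of the fine-tuned weights, and mergekit operates on full per-layer tensors. The function name is illustrative, not mergekit's API.

```python
# Toy sketch of the weight-averaging idea behind a Model Stock style merge.
# NOT the mergekit implementation: real merges act on per-layer tensors and
# derive the interpolation factor from the angle between fine-tuned weights.

def model_stock_merge(base, finetuned, t=0.5):
    """Average the fine-tuned weights, then interpolate toward the base.

    base:      list of floats (one 'layer' of base-model weights)
    finetuned: list of weight lists, one per fine-tuned model
    t:         interpolation factor (0 = base only, 1 = average only)
    """
    n = len(finetuned)
    avg = [sum(ws) / n for ws in zip(*finetuned)]            # element-wise mean
    return [(1 - t) * b + t * a for b, a in zip(base, avg)]  # lerp toward base

merged = model_stock_merge([0.0, 1.0], [[1.0, 1.0], [3.0, 3.0]], t=0.5)
print(merged)  # [1.0, 1.5]
```

With `t=0` the base weights are returned unchanged; with `t=1` the result is the plain average of the fine-tuned checkpoints.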
 
 - nicoboss/Llama-3.2-1B-Instruct-Uncensored
 - mylesgoose/Llama-3.2-1B-Instruct-abliterated3
 - Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
+datasets:
+- mlabonne/FineTome-100k
+- microsoft/orca-math-word-problems-200k
+- m-a-p/CodeFeedback-Filtered-Instruction
+- cognitivecomputations/dolphin-coder
+- PawanKrd/math-gpt-4o-200k
+- V3N0M/Jenna-50K-Alpaca-Uncensored
+- FreedomIntelligence/medical-o1-reasoning-SFT
 library_name: transformers
 tags:
+- llama3.2
+- llama
 - mergekit
 - merge
+- llama-cpp
+- nsfw
+- uncensored
+- abliterated
+- 1b
+- 4-bit
+- not-for-all-audiences
+language:
+- es
+- en
 ---
+<center> <h4><b>HARMFUL PROJECT</b></h4>
+<img src="https://i.ibb.co/3yqnMb7z/AQMEx-J7-A5c-F5r-SWsn8-CVc-Qms-Fa-RKi6y-Zsnp7-L5ca-Afcws-OKi-WDQLs-Mm0-YH6i-DEke-V6-HHIf-P0-XVBEbrb.gif" alt="HARMFUL PROJECT banner" border="0"> </center>
+
+### CORRECTED VERSION OF HARMFUL PROJECT 3.2 1B
+
+## English 🇬🇧
+
+This is a personal project to merge all of the uncensored and abliterated models into a single model. Each one contains its injected datasets, which can be found in the Hugging Face dataset repository, so I am not responsible for what may be found.
+
 The following models were included in the merge:
+* [archit11/Llama-1B-abliterated](https://huggingface.co/archit11/Llama-1B-abliterated)
+* [KidIkaros/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/KidIkaros/Llama-3.2-1B-Instruct-abliterated)
+* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO)
 * [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25)
+* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1)
+* [mylesgoose/Llama-3.2-1B-Instruct-abliterated3](https://huggingface.co/mylesgoose/Llama-3.2-1B-Instruct-abliterated3)
 * [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
+* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25)
+* [xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora](https://huggingface.co/xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora)
 * [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
+* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv)
+* [brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated](https://huggingface.co/brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated)
 * [Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25)
+* [ShuoGZ/llama-3.2-1B-Instruct-abliterated](https://huggingface.co/ShuoGZ/llama-3.2-1B-Instruct-abliterated)
 * [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
 * [huihui-ai/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated)
+* [Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated](https://huggingface.co/Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated)
 * [Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
+* [rbc33/Llama-3.2-1B-Instruct-Abliterated](https://huggingface.co/rbc33/Llama-3.2-1B-Instruct-Abliterated)
+* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv)
+* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25)
+
+If you want to participate in this project, inject your Llama 3.2 1B model with the data you think is needed and let me know, so I can add it to a new mix.👌
+
+---
+
+## Español 🇪🇦
+
+Se trata de un proyecto personal para mezclar en un solo modelo todos los modelos sin censura y abliterados. Cada uno contiene sus datasets inyectados, que pueden encontrarse en el repositorio de datasets de Hugging Face, por lo que no me hago responsable de lo que pueda encontrarse.
+
+Modelos incluidos en la mezcla:
+
 * [KidIkaros/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/KidIkaros/Llama-3.2-1B-Instruct-abliterated)
+* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO)
+* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25)
+* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1)
 * [mylesgoose/Llama-3.2-1B-Instruct-abliterated3](https://huggingface.co/mylesgoose/Llama-3.2-1B-Instruct-abliterated3)
+* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
+* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25)
+* [xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora](https://huggingface.co/xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora)
+* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
+* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv)
+* [brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated](https://huggingface.co/brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated)
+* [Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25)
+* [ShuoGZ/llama-3.2-1B-Instruct-abliterated](https://huggingface.co/ShuoGZ/llama-3.2-1B-Instruct-abliterated)
+* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
+* [huihui-ai/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated)
 * [Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated](https://huggingface.co/Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated)
+* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
+* [rbc33/Llama-3.2-1B-Instruct-Abliterated](https://huggingface.co/rbc33/Llama-3.2-1B-Instruct-Abliterated)
+* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv)
+* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25)
+
+Si desea participar en este proyecto, inyecte su modelo Llama 3.2 1B con los datos que crea necesarios y hágamelo saber; así lo meto en una nueva mezcla.👌
+
+### Configuration / Configuración
+
 ```yaml
 models:
 - model: xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
+- model: carsenk/llama3.2_1b_2025_uncensored_v2
 - model: Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
 - model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
 - model: KidIkaros/Llama-3.2-1B-Instruct-abliterated
 - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25
 - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25
 - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
@@ -95,9 +136,14 @@ models:
 - model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25
 - model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv
 - model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv
+- model: mylesgoose/Llama-3.2-1B-Instruct-abliterated3
+- model: ShuoGZ/llama-3.2-1B-Instruct-abliterated
+- model: brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated
+- model: rbc33/Llama-3.2-1B-Instruct-Abliterated
+
 merge_method: model_stock
 base_model: carsenk/llama3.2_1b_2025_uncensored_v2
 dtype: bfloat16
 parameters:
 t: [0, 0.5, 1, 0.5, 0]
+```
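As a quick sanity check before running a merge, a configuration like the one above can be scanned to list the checkpoints and settings it declares. A minimal stdlib-only sketch (not a real YAML parser, and not part of mergekit; the `summarize` helper and the shortened `CONFIG` string are illustrative):

```python
# Minimal stdlib-only scan of a mergekit-style config: collect the entries
# under `models:` and the top-level merge settings. Illustrative only.

CONFIG = """\
models:
- model: xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
- model: carsenk/llama3.2_1b_2025_uncensored_v2
- model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
merge_method: model_stock
base_model: carsenk/llama3.2_1b_2025_uncensored_v2
dtype: bfloat16
"""

def summarize(config: str):
    models, settings = [], {}
    for line in config.splitlines():
        line = line.strip()
        if line.startswith("- model:"):
            # list entry under `models:` -> checkpoint name
            models.append(line.split(":", 1)[1].strip())
        elif ":" in line and not line.startswith("-") and line != "models:":
            # top-level `key: value` setting
            key, value = line.split(":", 1)
            if value.strip():
                settings[key.strip()] = value.strip()
    return models, settings

models, settings = summarize(CONFIG)
print(len(models), settings["merge_method"])  # 3 model_stock
```

With a real config file, a proper YAML loader (e.g. PyYAML, which mergekit itself depends on) would be the safer choice; this sketch only avoids the extra dependency.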