TroyDoesAI committed
Commit 5fd5669 · verified · 1 parent: 4b6f6d0

Update README.md

Files changed (1)
  1. README.md +61 -60
README.md CHANGED
@@ -1,60 +1,61 @@
- ---
- base_model:
- - v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
- - rombodawg/Rombos-LLM-V2.6-Qwen-14b
- - huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
- - Qwen/Qwen2.5-14B
- - Qwen/Qwen2.5-14B-Instruct
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # BlackSheep
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno](https://huggingface.co/v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno)
- * [rombodawg/Rombos-LLM-V2.6-Qwen-14b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b)
- * [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)
- * [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: rombodawg/Rombos-LLM-V2.6-Qwen-14b # Fine-tune version
-     parameters:
-       weight: 1
-       density: 1
-   - model: v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno # Fine-tune version
-     parameters:
-       weight: 1
-       density: 1
-   - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 # Fine-tune version
-     parameters:
-       weight: 1
-       density: 1
-   - model: Qwen/Qwen2.5-14B-Instruct # Target model
-     parameters:
-       weight: 1
-       density: 1
- merge_method: ties
- base_model: Qwen/Qwen2.5-14B
- parameters:
-   normalize: true
-   int8_mask: true
- dtype: bfloat16
-
- ```
 
 
+ ---
+ base_model:
+ - v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
+ - rombodawg/Rombos-LLM-V2.6-Qwen-14b
+ - huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
+ - Qwen/Qwen2.5-14B
+ - Qwen/Qwen2.5-14B-Instruct
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # BlackSheep
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+ Further fine-tuned on the BlackSheep persona.
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno](https://huggingface.co/v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno)
+ * [rombodawg/Rombos-LLM-V2.6-Qwen-14b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b)
+ * [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)
+ * [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: rombodawg/Rombos-LLM-V2.6-Qwen-14b # Fine-tune version
+     parameters:
+       weight: 1
+       density: 1
+   - model: v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno # Fine-tune version
+     parameters:
+       weight: 1
+       density: 1
+   - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 # Fine-tune version
+     parameters:
+       weight: 1
+       density: 1
+   - model: Qwen/Qwen2.5-14B-Instruct # Target model
+     parameters:
+       weight: 1
+       density: 1
+ merge_method: ties
+ base_model: Qwen/Qwen2.5-14B
+ parameters:
+   normalize: true
+   int8_mask: true
+ dtype: bfloat16
+
+ ```
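Since every model in the config above uses `weight: 1` and `density: 1`, the merge reduces to sign-elected averaging of the task vectors over the base. The core TIES procedure from the linked paper (trim, elect sign, disjoint merge) can be sketched on toy tensors; this is an illustrative NumPy sketch of the idea, not mergekit's actual implementation:

```python
import numpy as np

def ties_merge(base, finetuned, density=1.0):
    """TIES-style merge sketch: trim each task vector, elect a sign
    per parameter, then average only the agreeing deltas onto the base."""
    # Task vectors: difference of each fine-tune from the shared base.
    deltas = [ft - base for ft in finetuned]
    trimmed = []
    for d in deltas:
        if density < 1.0:
            # Trim: keep only the top-`density` fraction of entries by magnitude.
            k = max(1, int(np.ceil(density * d.size)))
            thresh = np.sort(np.abs(d), axis=None)[-k]
            d = np.where(np.abs(d) >= thresh, d, 0.0)
        trimmed.append(d)
    stacked = np.stack(trimmed)
    # Elect sign: the dominant sign of the summed deltas, per parameter.
    elected = np.sign(stacked.sum(axis=0))
    # Disjoint merge: average only the deltas whose sign matches the elected one.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0) / counts
    return base + merged_delta

# Toy example: two "fine-tunes" disagree on the second parameter,
# so its conflicting updates cancel rather than averaging into noise.
base = np.zeros(4)
fts = [np.array([1.0, 1.0, -1.0, 0.0]),
       np.array([1.0, -1.0, -1.0, 0.0])]
print(ties_merge(base, fts))  # -> [ 1.  0. -1.  0.]
```

To reproduce the real merge, mergekit's documented CLI consumes the YAML above directly, e.g. `mergekit-yaml config.yaml ./BlackSheep` (output path here is hypothetical).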