Update README.md
---
base_model:
- v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
- rombodawg/Rombos-LLM-V2.6-Qwen-14b
- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
- Qwen/Qwen2.5-14B
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---

# BlackSheep

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
It was further fine-tuned on the BlackSheep persona.

## Merge Details

### Merge Method

This model was merged with the [TIES](https://arxiv.org/abs/2306.01708) merge method, using [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as the base.
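
For intuition, TIES merges fine-tunes in three steps: trim each fine-tune's task vector (its delta from the base) to the highest-magnitude fraction given by `density`, elect a per-parameter sign by majority mass, and average only the values that agree with the elected sign. The sketch below is a minimal NumPy illustration of that idea, not mergekit's actual implementation:

```python
import numpy as np

def ties_merge(base, finetunes, density=1.0):
    """Toy TIES merge of a single parameter tensor.

    base: base-model weights; finetunes: list of fine-tuned weights.
    Illustrative only -- not mergekit's implementation.
    """
    # Task vectors: what each fine-tune changed relative to the base.
    deltas = [ft - base for ft in finetunes]

    # Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k] if k > 0 else np.inf
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # Elect sign: per-parameter sign of the summed trimmed deltas.
    stacked = np.stack(trimmed)
    sign = np.sign(stacked.sum(axis=0))

    # Merge: average only the entries that agree with the elected sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (stacked * agree).sum(axis=0) / counts

    return base + merged_delta
```

With `weight: 1` and `density: 1` for every model, as in the configuration below, nothing is trimmed, so the step reduces to sign-consistent averaging of the full task vectors.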

### Models Merged

The following models were included in the merge:

* [v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno](https://huggingface.co/v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno)
* [rombodawg/Rombos-LLM-V2.6-Qwen-14b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b)
* [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: rombodawg/Rombos-LLM-V2.6-Qwen-14b # Fine-tune version
    parameters:
      weight: 1
      density: 1
  - model: v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno # Fine-tune version
    parameters:
      weight: 1
      density: 1
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 # Fine-tune version
    parameters:
      weight: 1
      density: 1
  - model: Qwen/Qwen2.5-14B-Instruct # Target model
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-14B
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
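
The merge can typically be reproduced by saving the YAML above (e.g. as `config.yaml`) and running mergekit's `mergekit-yaml` entry point on it. Since the card declares `library_name: transformers`, the result should load with the standard `transformers` API; below is a minimal sketch, where the repo id is a placeholder rather than a confirmed location:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/BlackSheep"  # placeholder: replace with this model's actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

# Qwen2.5 models ship a chat template, so apply_chat_template should work.
messages = [{"role": "user", "content": "Introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```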