divinetaco committed on
Commit
4e8e871
·
verified ·
1 Parent(s): 83b48e6

Upload /README.md with huggingface_hub

Files changed (1)
  1. README.md +21 -11
README.md CHANGED
```diff
@@ -1,7 +1,11 @@
 ---
-base_model: []
+license: llama3.3
+base_model:
+- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
 library_name: transformers
 tags:
+- not-for-all-audiences
+- nsfw
 - mergekit
 - merge
 
@@ -11,18 +15,24 @@ tags:
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
-### Merge Method
+
+An RP merge with a focus on:
+<br>\- model intelligence
+<br>\- removing positive bias
+<br>\- creativity
 
 This model was merged using the sce merge method using deepseek-r1-distill-llama-70b as a base.
 
+<img src="https://huggingface.co/divinetaco/L3.3-70B-Lycosa-v0.1/resolve/main/lycosa.png">
+
 ### Models Merged
 
 The following models were included in the merge:
-* Nautilus-70B-v0.1
-* doctor-shotgun-magnum-v4-se-70b
-* llama-3.3-70b-instruct
-* sicariussicariistuff-negative-llama-3.3-70b
-* sao10k-70b-l3.3-cirrus-x1
+* deepseek-ai/DeepSeek-R1-Distill-Llama-70B
+* Sao10K/70B-L3.3-Cirrus-x1
+* TheDrummer/Nautilus-70B-v0.1
+* Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
+* SicariusSicariiStuff/Negative_LLAMA_70B
 
 ### Configuration
 
@@ -33,10 +43,10 @@ models:
 # Pivot model
 - model: llama-3.3-70b-instruct
 # Target models
-- model: sao10k-70b-l3.3-cirrus-x1
-- model: Nautilus-70B-v0.1
-- model: sicariussicariistuff-negative-llama-3.3-70b
-- model: doctor-shotgun-magnum-v4-se-70b
+- model: Sao10K/70B-L3.3-Cirrus-x1
+- model: TheDrummer/Nautilus-70B-v0.1
+- model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
+- model: SicariusSicariiStuff/Negative_LLAMA_70B
 merge_method: sce
 base_model: deepseek-r1-distill-llama-70b
 parameters:
```
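Pieced together from the visible hunks, the new mergekit configuration reads roughly as follows. This is a sketch for readability only: the body of the trailing `parameters:` block is truncated in the diff, so it is left empty here rather than guessed at.

```yaml
# Reconstructed from the diff above; the contents of `parameters:` are
# not shown in the commit view and are therefore omitted.
models:
  # Pivot model
  - model: llama-3.3-70b-instruct
  # Target models
  - model: Sao10K/70B-L3.3-Cirrus-x1
  - model: TheDrummer/Nautilus-70B-v0.1
  - model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
base_model: deepseek-r1-distill-llama-70b
parameters:
```

Assuming a standard mergekit installation, a config in this shape is typically applied with `mergekit-yaml config.yaml ./output-dir`.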