Tarek07 committed 0fed708 · verified · 1 Parent(s): 332c697

Update README.md

Files changed (1): README.md (+25 −17)

README.md CHANGED
@@ -1,29 +1,37 @@
 ---
-base_model: []
+base_model:
+- Sao10K/L3.1-70B-Hanami-x1
+- Sao10K/70B-L3.3-Cirrus-x1
+- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
+- SicariusSicariiStuff/Negative_LLAMA_70B
+- TheDrummer/Anubis-70B-v1
+- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
 library_name: transformers
 tags:
 - mergekit
 - merge
-
+license: llama3.3
 ---
-# PrimogentorSCE3
+# Primogenitor V2
+
+Same ingredients as Progenitor, but with Wayfarer in the mix!
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
-This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using downloads/Llama-3.1-Nemotron-lorablated-70B as a base.
+This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using nbeerbower/Llama-3.1-Nemotron-lorablated-70B as a base.
 
 ### Models Merged
 
 The following models were included in the merge:
-* downloads/Wayfarer-Large-70B-Llama-3.3
-* downloads/EVA-LLaMA-3.33-70B-v0.1
-* downloads/Negative_LLAMA_70B
-* downloads/L3.1-70B-Hanami-x1
-* downloads/70B-L3.3-Cirrus-x1
-* downloads/Anubis-70B-v1
+* Sao10K/L3.1-70B-Hanami-x1
+* Sao10K/70B-L3.3-Cirrus-x1
+* LatitudeGames/Wayfarer-Large-70B-Llama-3.3
+* SicariusSicariiStuff/Negative_LLAMA_70B
+* TheDrummer/Anubis-70B-v1
+* EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
 
 ### Configuration
 
@@ -31,14 +39,14 @@ The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
-  - model: downloads/L3.1-70B-Hanami-x1
-  - model: downloads/70B-L3.3-Cirrus-x1
-  - model: downloads/Wayfarer-Large-70B-Llama-3.3
-  - model: downloads/Negative_LLAMA_70B
-  - model: downloads/Anubis-70B-v1
-  - model: downloads/EVA-LLaMA-3.33-70B-v0.1
+  - model: Sao10K/L3.1-70B-Hanami-x1
+  - model: Sao10K/70B-L3.3-Cirrus-x1
+  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
+  - model: SicariusSicariiStuff/Negative_LLAMA_70B
+  - model: TheDrummer/Anubis-70B-v1
+  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
 merge_method: sce
-base_model: downloads/Llama-3.1-Nemotron-lorablated-70B
+base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
 parameters:
   select_topk: 0.17
 out_dtype: bfloat16
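
For intuition about the `select_topk: 0.17` setting: in SCE, only the parameter positions whose task vectors vary most across the source models are selected for merging. The sketch below is a deliberate pure-Python simplification of that top-k variance selection (with a plain mean of the selected deltas), assuming flat lists of floats — it is not mergekit's actual SCE implementation, which operates on full tensors and computes per-matrix fusion coefficients.

```python
def sce_topk_select_merge(base, model_weights, select_topk=0.17):
    """Toy illustration of SCE-style top-k variance selection.

    Only the `select_topk` fraction of positions with the highest
    variance across the models' task vectors is merged (here via a
    simple mean of the deltas); all other positions keep the base value.
    """
    n = len(base)
    # Task vectors: each model's elementwise difference from the base.
    deltas = [[w[i] - base[i] for i in range(n)] for w in model_weights]

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Variance of each parameter position across the task vectors.
    variances = [variance([d[i] for d in deltas]) for i in range(n)]

    # Keep the top `select_topk` fraction of positions by variance.
    k = max(1, round(select_topk * n))
    keep = sorted(range(n), key=lambda i: variances[i], reverse=True)[:k]

    merged = list(base)
    for i in keep:
        merged[i] += sum(d[i] for d in deltas) / len(deltas)
    return merged
```

With `select_topk: 0.17`, roughly 17% of positions survive the selection step; everything else falls back to the base model's weights.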