Steelskull committed
Commit 20fd26c · verified · 1 Parent(s): 84c6752

Update README.md

Files changed (1):
README.md (+19 -21)
README.md CHANGED
@@ -1,31 +1,29 @@
- ---
- base_model:
- - meta-llama/Meta-Llama-3-8B-Instruct
- library_name: transformers
- tags:
- - mergekit
- - merge
 
- ---
- # merge
 
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
- ## Merge Details
- ### Merge Method
 
- This model was merged using the passthrough merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
  * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
 
- ### Configuration
 
- The following YAML configuration was used to produce this model:
 
- ```yaml
  slices:
  - sources:
    - model: meta-llama/Meta-Llama-3-8B-Instruct
@@ -35,4 +33,4 @@ slices:
      layer_range: [7, 31]
  merge_method: passthrough
  dtype: bfloat16
- ```
 
+ # Aura-llama
 
+ ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/QYpWMEXTe0_X3A7HyeBm0.webp)
+ Now that the cute anime girl has your attention.
 
+ Aura-llama uses depth up-scaling (DUS), the LLM scaling methodology presented in the SOLAR paper, which combines architectural modification with continued pretraining.
+ Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and I plan to continue training the model in the future.
 
+ Aura-llama is a merge of the following models, creating a base model to work from:
+ * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
  * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
 
+ ## Merged Evals (Not Yet Finetuned)
+ Aura-llama
+ * Avg: ?
+ * ARC: ?
+ * HellaSwag: ?
+ * MMLU: ?
+ * T-QA: ?
+ * Winogrande: ?
+ * GSM8K: ?
 
+ ## 🧩 Configuration
 
+ ```yaml
  slices:
  - sources:
    - model: meta-llama/Meta-Llama-3-8B-Instruct
 
      layer_range: [7, 31]
  merge_method: passthrough
  dtype: bfloat16
+ ```
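
The diff only shows part of the new configuration block, since the first slice's layer_range falls outside the displayed hunks. For reference, below is a minimal sketch of what a complete two-slice DUS passthrough config for mergekit looks like; the [0, 24] range on the first slice is an assumed value for illustration, as only the second slice's [7, 31] range is visible above.

```yaml
# Minimal sketch of a DUS-style passthrough merge config for mergekit.
# Only the second slice's layer_range [7, 31] appears in the diff above;
# the first slice's [0, 24] is an assumed value used for illustration.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 24]   # assumed: lower block of the 32-layer base
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [7, 31]   # upper block, overlapping the first slice
merge_method: passthrough      # concatenate slices instead of averaging weights
dtype: bfloat16
```

A config along these lines is typically run with mergekit's `mergekit-yaml` command (for example `mergekit-yaml config.yml ./Aura-llama`), producing a model deeper than the 32-layer base; in the SOLAR recipe the duplicated layers are then healed with continued pretraining, which the card above notes has not yet been done.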