Commit by Kareem Amr: End of training

Files changed:
- README.md (+26, -25)
- adapter_model.bin (+1, -1)

README.md CHANGED
```diff
@@ -2,10 +2,11 @@
 license: apache-2.0
 library_name: peft
 tags:
+- axolotl
 - generated_from_trainer
 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
 model-index:
-- name:
+- name: tinyllama-1.1B_alpaca_2k_lora
   results: []
 ---

```
````diff
@@ -17,13 +18,13 @@ should probably proofread and complete it, then remove this comment. -->

 axolotl version: `0.4.0`
 ```yaml
-#
-
+# Upload the final model to Huggingface
+hub_model_id: kareemamrr/tinyllama-1.1B_alpaca_2k_lora

-#
-
-
-
+# Store the training logs in weights and biases
+wandb_entity: kamr54
+wandb_project: tinyllama-1.1B_alpaca_2k_peft
+wandb_name: lora-run

 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
 model_type: LlamaForCausalLM
````
```diff
@@ -88,11 +89,11 @@ special_tokens:

 </details><br>

-#
+# tinyllama-1.1B_alpaca_2k_lora

 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
+- Loss: 1.2127

 ## Model description

```
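Since the commit publishes the adapter under `kareemamrr/tinyllama-1.1B_alpaca_2k_lora` (the `hub_model_id` above), the usual way to try it is to load the base model and attach the LoRA weights with `peft`. A minimal sketch of standard transformers/peft usage, not part of the commit itself; the Alpaca-style prompt is an assumption based on the dataset name:

```python
# Minimal sketch: load the TinyLlama base model and attach the LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "kareemamrr/tinyllama-1.1B_alpaca_2k_lora"  # hub_model_id from the config

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # applies adapter_model.bin

# Assumed Alpaca-style prompt, matching the dataset hinted at by the model name.
prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a standalone checkpoint is needed, `model.merge_and_unload()` can fold the adapter into the base weights.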
```diff
@@ -127,22 +128,22 @@ The following hyperparameters were used during training:
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | 1.4615        | 0.08   | 1    | 1.4899          |
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.1515 | 2.32 | 30 | 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.1002 | 3.48 | 45 | 1.
-| 1.
+| 1.3847        | 0.24   | 3    | 1.4865          |
+| 1.3673        | 0.48   | 6    | 1.4376          |
+| 1.2673        | 0.72   | 9    | 1.3401          |
+| 1.2257        | 0.96   | 12   | 1.2967          |
+| 1.2511        | 1.16   | 15   | 1.2835          |
+| 1.2267        | 1.4    | 18   | 1.2501          |
+| 1.1348        | 1.6400 | 21   | 1.2330          |
+| 1.2699        | 1.88   | 24   | 1.2276          |
+| 1.1486        | 2.08   | 27   | 1.2258          |
+| 1.1515        | 2.32   | 30   | 1.2224          |
+| 1.1949        | 2.56   | 33   | 1.2175          |
+| 1.1127        | 2.8    | 36   | 1.2158          |
+| 1.1506        | 3.04   | 39   | 1.2126          |
+| 1.1886        | 3.24   | 42   | 1.2110          |
+| 1.1002        | 3.48   | 45   | 1.2106          |
+| 1.1894        | 3.7200 | 48   | 1.2127          |


 ### Framework versions
```
adapter_model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4b4c262a5d4b19857dc9a167ad62c6edc069c36aa01804e19b7e0c13a86a295b
 size 101036698
```
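The weights themselves live in Git LFS; the repository stores only this pointer file. The new `oid` and `size` are enough to check a downloaded copy of the adapter, as in this sketch (it assumes `adapter_model.bin` has been downloaded to the current directory):

```python
# Minimal sketch: verify a downloaded adapter_model.bin against the
# sha256 and byte size recorded in this commit's LFS pointer.
import hashlib
import os

expected_oid = "4b4c262a5d4b19857dc9a167ad62c6edc069c36aa01804e19b7e0c13a86a295b"
expected_size = 101036698

path = "adapter_model.bin"  # assumed local download path
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```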