Update README.md
<!-- Provide a quick summary of what the model is/does. -->

Fine-tuned from the base model [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) using [alpaca-lora](https://github.com/tloen/alpaca-lora/tree/main).

- Aiming to use this as a PEFT adapter for further fine-tuning tasks (a minimal loading sketch follows).
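A minimal sketch of that workflow, assuming the usual `transformers`/`peft` loading pattern; the adapter id below is a placeholder for this repository, and the dtype is only an example:

```python
# Sketch: load the LoRA adapter on top of the base model for further fine-tuning.
# "your-username/this-adapter" is a placeholder; substitute this repository's model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "your-username/this-adapter"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# is_trainable=True keeps the LoRA weights unfrozen so they can be trained further
model = PeftModel.from_pretrained(base_model, adapter_id, is_trainable=True)
```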
## Model Details
Here are the training parameters (the equivalent `peft` `LoraConfig` is sketched after the list):
- base_model: 'TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T'
- data_path: 'yahma/alpaca-cleaned'
- lora_r: 16
- lora_alpha: 16
- lora_dropout: 0.05
- lora_target_modules: '[q_proj, k_proj, v_proj, o_proj]'
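For reference, here is a sketch of the `peft` `LoraConfig` implied by those hyperparameters (alpaca-lora builds a config like this internally; `bias` and `task_type` are assumed defaults, not values stated in this card):

```python
# LoRA configuration corresponding to the hyperparameters listed above.
# bias="none" and task_type="CAUSAL_LM" are the usual choices for this setup,
# not values recorded in this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```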
Training took 6-7 hours on a single A5000 GPU (multi-GPU training ran into numerous issues).
### Model Description
<!-- Provide a longer summary of what this model is. -->