Update README.md
README.md CHANGED

@@ -109,15 +109,13 @@ Efficient Fine-tuning Method: LoRA (Low-Rank Adaptation)
 
 #### Training Hyperparameters
 
-
-
-
-
-
-
-
-LoRA Rank: 16
-LoRA Alpha: 32
+- Learning Rate: 2e-5
+- Batch Size: 4
+- Gradient Accumulation Steps: 4
+- Training Steps: 500
+- Warmup Steps: 20
+- LoRA Rank: 16
+- LoRA Alpha: 32
 
 
 #### Speeds, Sizes, Times [optional]
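The rewritten hyperparameter list maps directly onto a standard Hugging Face peft + transformers fine-tuning setup. The sketch below is a minimal illustration under that assumption only; the commit touches just the README, so the base model name, output directory, and task type here are hypothetical placeholders, not the author's actual training code.

```python
# Minimal sketch: how the hyperparameters listed in this commit would look in a
# Hugging Face peft + transformers configuration. "base-model-name" and
# "lora-finetune" are placeholders; the real training script is not in this commit.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained("base-model-name")  # placeholder

lora_config = LoraConfig(
    r=16,                   # LoRA Rank: 16
    lora_alpha=32,          # LoRA Alpha: 32
    task_type="CAUSAL_LM",  # assumption: causal-LM fine-tuning
)
model = get_peft_model(base_model, lora_config)

training_args = TrainingArguments(
    output_dir="lora-finetune",     # placeholder
    learning_rate=2e-5,             # Learning Rate: 2e-5
    per_device_train_batch_size=4,  # Batch Size: 4
    gradient_accumulation_steps=4,  # Gradient Accumulation Steps: 4
    max_steps=500,                  # Training Steps: 500
    warmup_steps=20,                # Warmup Steps: 20
)
```

Note that a batch size of 4 combined with 4 gradient-accumulation steps gives an effective batch size of 16 sequences per optimizer step.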