rshacter committed
Commit a4d9e58 · verified · 1 parent: 9c45140

Update README.md

Files changed (1): README.md (+7 −9)
README.md CHANGED
@@ -109,15 +109,13 @@ Efficient Fine-tuning Method: LoRA (Low-Rank Adaptation)

#### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- Learning Rate: 2e-5
- Batch Size: 4
- Gradient Accumulation Steps: 4
- Training Steps: 500
- Warmup Steps: 20
- LoRA Rank: 16
- LoRA Alpha: 32
+ - Learning Rate: 2e-5
+ - Batch Size: 4
+ - Gradient Accumulation Steps: 4
+ - Training Steps: 500
+ - Warmup Steps: 20
+ - LoRA Rank: 16
+ - LoRA Alpha: 32


#### Speeds, Sizes, Times [optional]
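
For reference, here is a minimal sketch of how the hyperparameters in this diff might map onto a PEFT + transformers setup. Only the numeric values come from the diff above; the `output_dir`, `task_type`, and the overall training harness are assumptions, not taken from this repository:

```python
# Minimal sketch mapping the README's hyperparameters onto peft/transformers.
# The numeric values are from the diff; output_dir and task_type are
# hypothetical assumptions.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                    # LoRA Rank: 16
    lora_alpha=32,           # LoRA Alpha: 32
    task_type="CAUSAL_LM",   # assumption; not stated in the diff
)

training_args = TrainingArguments(
    output_dir="./lora-finetune",    # hypothetical path
    learning_rate=2e-5,              # Learning Rate: 2e-5
    per_device_train_batch_size=4,   # Batch Size: 4
    gradient_accumulation_steps=4,   # Gradient Accumulation Steps: 4
    max_steps=500,                   # Training Steps: 500
    warmup_steps=20,                 # Warmup Steps: 20
)

# These objects would then be passed to peft.get_peft_model(...) and a
# transformers.Trainer for the actual fine-tuning run.
```

Note that with a per-device batch size of 4 and 4 gradient-accumulation steps, the effective batch size is 16.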