PseudoTerminal X
committed
Model card auto-generated by SimpleTuner
README.md CHANGED
@@ -149,8 +149,8 @@ You may reuse the base model text encoder for inference.
 ## Training settings
 
 - Training epochs: 0
-- Training steps:
-- Learning rate:
+- Training steps: 500
+- Learning rate: 4e-05
 - Effective batch size: 2
 - Micro-batch size: 1
 - Gradient accumulation steps: 2
@@ -159,6 +159,7 @@ You may reuse the base model text encoder for inference.
 - Rescaled betas zero SNR: False
 - Optimizer: AdamW, stochastic bf16
 - Precision: Pure BF16
+- Quantised: No
 - Xformers: Not used
 - LoRA Rank: 128
 - LoRA Alpha: 128.0
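For context on the settings above: the effective batch size of 2 is the micro-batch size (1) times the gradient accumulation steps (2), assuming a single training device. Since the card notes the base model text encoder can be reused for inference, here is a minimal loading sketch, assuming a diffusers-compatible pipeline that supports LoRA loading; the model and repository ids are placeholders, not values from this commit.

```python
# Minimal sketch, assuming a diffusers pipeline with LoRA support.
# "base/model-id" and "user/lora-repo" are placeholders, not values
# taken from this model card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "base/model-id",             # placeholder: the card's base model
    torch_dtype=torch.bfloat16,  # matches the card's "Pure BF16" precision
)
pipe.load_lora_weights("user/lora-repo")  # placeholder: this LoRA's repo id
```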