chauhoang committed (verified)
Commit 0c7385b · 1 parent: 9bf1363

End of training

Files changed (2):
  1. README.md +4 -11
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -64,7 +64,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 50
+max_steps: 1
 micro_batch_size: 2
 mlflow_experiment_name: /tmp/3547704a22c5b5a5_train_data.json
 model_type: AutoModelForCausalLM
@@ -91,7 +91,7 @@ wandb_name: 8a27943e-4c32-44a9-b580-9571b022d880
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: 8a27943e-4c32-44a9-b580-9571b022d880
-warmup_steps: 10
+warmup_steps: 1
 weight_decay: 0.0
 xformers_attention: null
 
@@ -102,8 +102,6 @@ xformers_attention: null
 # 8a27943e-4c32-44a9-b580-9571b022d880
 
 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 10.3172
 
 ## Model description
 
@@ -130,19 +128,14 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 10
-- training_steps: 50
+- lr_scheduler_warmup_steps: 2
+- training_steps: 1
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | No log | 0.0001 | 1 | 10.3366 |
-| 10.333 | 0.0012 | 10 | 10.3332 |
-| 10.3255 | 0.0023 | 20 | 10.3258 |
-| 10.321 | 0.0035 | 30 | 10.3203 |
-| 10.3165 | 0.0047 | 40 | 10.3177 |
-| 10.3149 | 0.0059 | 50 | 10.3172 |
 
 
 ### Framework versions
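Net effect of the README change: the run is cut from 50 optimizer steps with 10 warmup steps down to a single step, the evaluation-loss summary is dropped, and all intermediate rows of the results table disappear. For context, the schedule named by `lr_scheduler: cosine` can be reproduced with transformers' `get_cosine_schedule_with_warmup`; a minimal sketch using the pre-change step counts, where the parameters and learning rate are stand-ins rather than values from this diff:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in parameters; the real run optimizes LoRA adapter weights
# with ADAMW_BNB (8-bit AdamW from bitsandbytes), not plain AdamW.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=2e-4)  # lr assumed, not in this diff

# Pre-change schedule: linear warmup for 10 steps, cosine decay toward 0 by step 50.
# After this commit the counts drop to warmup_steps: 1 and max_steps: 1.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,
    num_training_steps=50,
)

for step in range(50):
    optimizer.step()
    scheduler.step()
    if step in (0, 9, 24, 49):
        print(f"step {step + 1}: lr = {scheduler.get_last_lr()[0]:.2e}")
```

With `training_steps: 1` the scheduler never leaves warmup, so the single update runs at a small fraction of the peak learning rate.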
 
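Since only README.md and the adapter weights changed, the checkpoint is still a small LoRA adapter (`lora_r: 8`) layered on the base model. A minimal loading sketch with PEFT; the adapter repo id below is a placeholder, since this repository's id does not appear in the diff:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "fxmarty/tiny-llama-fast-tokenizer"   # base model named in the README
adapter_id = "your-username/your-adapter-repo"  # placeholder, not shown in the diff

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attaches the LoRA weights stored in adapter_model.bin to the base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```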
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f553ab5d5a4ed2ad2bb125fe733e891f3663a7fce808e4e7da8380f357f10902
+oid sha256:241d5683ff72637324779991bd9688a9d97d6ca5d3921faacf9080b9b333c7bc
 size 33666
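The adapter_model.bin entries above are Git LFS pointers, not the weights themselves: `oid sha256:` is the SHA-256 digest of the actual blob and `size` its byte count, so a changed oid with an unchanged size means the weights were retrained but serialize to the same 33666 bytes. A quick integrity check of a downloaded copy, assuming it sits at a local path:

```python
import hashlib
from pathlib import Path

# Hypothetical local path to the downloaded adapter weights.
path = Path("adapter_model.bin")

# The digest of the downloaded blob should match the pointer's oid above.
expected = "241d5683ff72637324779991bd9688a9d97d6ca5d3921faacf9080b9b333c7bc"
digest = hashlib.sha256(path.read_bytes()).hexdigest()
assert digest == expected, f"checksum mismatch: {digest}"
print("OK:", digest, "size:", path.stat().st_size)  # size should be 33666
```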