error577 committed
Commit 47df85c · verified · Parent: acd84ed

End of training

Files changed (3)
  1. README.md +192 -0
  2. adapter_model.bin +3 -0
  3. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,192 @@
---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5f0159c4-1008-4527-9092-4ee6e6b9e663
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
20
+ ```yaml
21
+ adapter: qlora
22
+ auto_resume_from_checkpoints: true
23
+ base_model: facebook/opt-125m
24
+ bf16: auto
25
+ chat_template: llama3
26
+ dataloader_num_workers: 12
27
+ dataset_prepared_path: null
28
+ datasets:
29
+ - data_files:
30
+ - 47b36a24df61e9c5_train_data.json
31
+ ds_type: json
32
+ format: custom
33
+ path: /workspace/input_data/47b36a24df61e9c5_train_data.json
34
+ type:
35
+ field_input: documents
36
+ field_instruction: question
37
+ field_output: answer
38
+ format: '{instruction} {input}'
39
+ no_input_format: '{instruction}'
40
+ system_format: '{system}'
41
+ system_prompt: ''
42
+ debug: null
43
+ deepspeed: null
44
+ early_stopping_patience: 3
45
+ eval_max_new_tokens: 128
46
+ eval_steps: 50
47
+ eval_table_size: null
48
+ evals_per_epoch: null
49
+ flash_attention: true
50
+ fp16: null
51
+ fsdp: null
52
+ fsdp_config: null
53
+ gradient_accumulation_steps: 16
54
+ gradient_checkpointing: true
55
+ group_by_length: true
56
+ hub_model_id: error577/5f0159c4-1008-4527-9092-4ee6e6b9e663
57
+ hub_repo: null
58
+ hub_strategy: checkpoint
59
+ hub_token: null
60
+ learning_rate: 0.0003
61
+ load_in_4bit: true
62
+ load_in_8bit: false
63
+ local_rank: null
64
+ logging_steps: 1
65
+ lora_alpha: 128
66
+ lora_dropout: 0.3
67
+ lora_fan_in_fan_out: null
68
+ lora_model_dir: null
69
+ lora_r: 128
70
+ lora_target_linear: true
71
+ lr_scheduler: cosine
72
+ max_grad_norm: 1.0
73
+ max_steps: null
74
+ micro_batch_size: 1
75
+ mlflow_experiment_name: /tmp/47b36a24df61e9c5_train_data.json
76
+ model_type: AutoModelForCausalLM
77
+ num_epochs: 3
78
+ optimizer: adamw_bnb_8bit
79
+ output_dir: miner_id_24
80
+ pad_to_sequence_len: true
81
+ resume_from_checkpoint: null
82
+ s2_attention: null
83
+ sample_packing: false
84
+ save_steps: 50
85
+ sequence_len: 512
86
+ strict: false
87
+ tf32: false
88
+ tokenizer_type: AutoTokenizer
89
+ train_on_inputs: false
90
+ trust_remote_code: true
91
+ val_set_size: 0.02
92
+ wandb_entity: null
93
+ wandb_mode: online
94
+ wandb_name: a6924886-18eb-47b1-8a4b-24becc99648c
95
+ wandb_project: Gradients-On-Demand
96
+ wandb_run: your_name
97
+ wandb_runid: a6924886-18eb-47b1-8a4b-24becc99648c
98
+ warmup_steps: 10
99
+ weight_decay: 0.01
100
+ xformers_attention: null
101
+
102
+ ```

</details><br>
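
For reproducibility: given Axolotl 0.4.1 and this YAML saved to a file (the name `config.yaml` here is hypothetical), a run of this kind is typically launched with `accelerate launch -m axolotl.cli.train config.yaml`.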

# 5f0159c4-1008-4527-9092-4ee6e6b9e663

This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the JSON dataset referenced in the config above (`47b36a24df61e9c5_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 2.2857
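
As a minimal usage sketch (not part of the generated card): this repo holds a PEFT LoRA adapter for `facebook/opt-125m`, published under the `hub_model_id` in the config above, so it should load along these lines. The prompt string is a made-up example following the config's `'{instruction} {input}'` format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the trained LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "error577/5f0159c4-1008-4527-9092-4ee6e6b9e663")
model.eval()

# Hypothetical prompt in the '{instruction} {input}' format from the config.
prompt = "Answer the question using the documents. Question text... Document text..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```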

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: ADAMW_BNB (8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
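
For reference, the effective batch size follows directly from the config: total_train_batch_size = train_batch_size × gradient_accumulation_steps = 1 × 16 = 16.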

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 47.7848 | 0.0004 | 1 | 3.0276 |
| 34.159 | 0.0202 | 50 | 2.9019 |
| 39.2367 | 0.0405 | 100 | 2.5922 |
| 52.0151 | 0.0607 | 150 | 2.5356 |
| 33.4878 | 0.0809 | 200 | 2.5099 |
| 25.6957 | 0.1012 | 250 | 2.4814 |
| 27.5454 | 0.1214 | 300 | 2.4519 |
| 32.6855 | 0.1417 | 350 | 2.4207 |
| 25.3411 | 0.1619 | 400 | 2.4211 |
| 27.4427 | 0.1821 | 450 | 2.4128 |
| 34.6101 | 0.2024 | 500 | 2.3944 |
| 23.8259 | 0.2226 | 550 | 2.3888 |
| 23.7378 | 0.2428 | 600 | 2.3808 |
| 27.431 | 0.2631 | 650 | 2.3735 |
| 26.069 | 0.2833 | 700 | 2.3755 |
| 20.5981 | 0.3035 | 750 | 2.3722 |
| 23.1821 | 0.3238 | 800 | 2.3646 |
| 20.5374 | 0.3440 | 850 | 2.3509 |
| 22.8665 | 0.3642 | 900 | 2.3556 |
| 21.9577 | 0.3845 | 950 | 2.3418 |
| 20.0986 | 0.4047 | 1000 | 2.3399 |
| 29.616 | 0.4250 | 1050 | 2.3433 |
| 25.8536 | 0.4452 | 1100 | 2.3335 |
| 18.732 | 0.4654 | 1150 | 2.3298 |
| 21.2083 | 0.4857 | 1200 | 2.3250 |
| 20.2594 | 0.5059 | 1250 | 2.3195 |
| 14.3002 | 0.5261 | 1300 | 2.3196 |
| 24.714 | 0.5464 | 1350 | 2.3132 |
| 22.0257 | 0.5666 | 1400 | 2.3093 |
| 16.7176 | 0.5868 | 1450 | 2.3012 |
| 15.5525 | 0.6071 | 1500 | 2.3052 |
| 20.5451 | 0.6273 | 1550 | 2.2970 |
| 31.716 | 0.6475 | 1600 | 2.2905 |
| 23.2587 | 0.6678 | 1650 | 2.2938 |
| 16.72 | 0.6880 | 1700 | 2.2914 |
| 19.7095 | 0.7083 | 1750 | 2.2868 |
| 25.7639 | 0.7285 | 1800 | 2.2802 |
| 30.8813 | 0.7487 | 1850 | 2.2860 |
| 25.8737 | 0.7690 | 1900 | 2.2825 |
| 21.8546 | 0.7892 | 1950 | 2.2857 |
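
Note that the run ends at step 1950 (epoch ≈ 0.79) rather than the configured 3 epochs. This is consistent with `early_stopping_patience: 3` and `eval_steps: 50` in the config: the best validation loss, 2.2802 at step 1800, was not improved upon in the following three evaluations.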

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- PyTorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fdf648e20e878d7799c8afbe7e13c9192f6d11cb7f291bd97964b263a723d5a0
size 84987466
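
Both adapter files are stored as Git LFS pointers: the `oid` is the SHA-256 of the actual file content, so a download can be checked against it. A minimal sketch (assuming the file has already been fetched into the working directory):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks to avoid loading it into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest taken from the LFS pointer above.
expected = "fdf648e20e878d7799c8afbe7e13c9192f6d11cb7f291bd97964b263a723d5a0"
assert sha256_of("adapter_model.bin") == expected, "checksum mismatch"
```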
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d20bbeee19301505929e26996970f7416daa3c015a9a8eefc29b48275a8c80ab
+ oid sha256:cc58c42178182a61d3b5a2f279be7ba11ae66d428ae82c14ce9963336770fc3c
  size 84954584