Weyaxi committed on
Commit 1edc6ab · verified · 1 Parent(s): 7f18f99

Model save

Files changed (2)
  1. README.md +265 -0
  2. generation_config.json +7 -0
README.md ADDED
@@ -0,0 +1,265 @@
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Einstein-v8-Llama3.2-1B
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Llama-3.2-1B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: chatml
datasets:
  - path: datasets/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

  - path: datasets/buzz_unstacked_chosen_math_removed_filtered.json
    ds_type: json
    type: alpaca
    conversation: chatml

  - path: datasets/capybara_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/everythinglm-data-v3_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

  - path: datasets/gpt4_data_lmys_1m_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/gpteacher-instruct-special-alpaca.json
    ds_type: json
    type: gpteacher
    conversation: chatml

  - path: datasets/merged_all.json
    ds_type: json
    type: alpaca
    conversation: chatml

  - path: datasets/no_robots_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

  - path: datasets/oasst_top1_from_fusechatmixture_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

  - path: datasets/pippa_bagel_repo_3k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/rpguild_quarter_alignment_lab_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/sharegpt_gpt4_english.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/slimorca_dedup_filtered_95k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/soda_diaolog_longest_tenth_buzz_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/synthia-v1.3_sharegpt_12500.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/system_conversations_dolphin_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/NuminaMath-CoT-olympiads-40k_alpaca.json
    ds_type: json
    type: alpaca
    conversation: chatml

  - path: datasets/math-gpt-4o-40k_alpaca.json
    ds_type: json
    type: alpaca
    conversation: chatml

  - path: datasets/sonnet3.5_science_conversations_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: datasets/reasoning-0.01_sharegpt.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.002

output_dir: ./Einstein-v8-Llama3.2-1B-model

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name: Einstein-v8-Llama3.2-1B-2-epoch
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v8-Llama3.2-1B

save_safetensors: true

gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: adamw_bnb_8bit # look
lr_scheduler: cosine
learning_rate: 0.000005 # look

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:

deepspeed: axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "<|im_end|>"
  unk_token: "<unk>"
  pad_token: <|end_of_text|> # changed
tokens:
  - "<|im_start|>"

```

</details><br>

# Einstein-v8-Llama3.2-1B

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the datasets listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.9292

## Model description

Einstein-v8-Llama3.2-1B is a full fine-tune of meta-llama/Llama-3.2-1B trained with Axolotl on a mix of instruction-following, conversation, math, science, and roleplay datasets (see the config above). It uses the ChatML conversation format: `<|im_start|>` is added as a token and `<|im_end|>` serves as the end-of-turn (EOS) token.

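A minimal inference sketch, assuming the tokenizer published with this repository carries the ChatML chat template and the `<|im_start|>`/`<|im_end|>` tokens configured above; the prompt text and generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v8-Llama3.2-1B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain why the sky is blue in two sentences."},
]

# apply_chat_template renders the ChatML layout:
# <|im_start|>system ... <|im_end|>\n<|im_start|>user ... <|im_end|>\n<|im_start|>assistant
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    # stop on the ChatML end-of-turn token configured during training
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
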
## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained on the ShareGPT-, Alpaca-, and GPTeacher-formatted JSON datasets listed in the axolotl config above, all rendered in the ChatML conversation format. A held-out split of 0.2% of the prepared data (`val_set_size: 0.002`) provides the validation losses reported below.

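As a rough illustration of the `val_set_size: 0.002` setting, a 0.2% validation split can be carved out with the `datasets` library; the file name below is a placeholder, and the seed simply mirrors the reported training seed:

```python
from datasets import load_dataset

# Illustrative only: load one prepared JSON file and hold out 0.2% for evaluation.
ds = load_dataset("json", data_files="datasets/some_dataset.json", split="train")
split = ds.train_test_split(test_size=0.002, seed=42)

train_ds, eval_ds = split["train"], split["test"]
print(len(train_ds), len(eval_ds))
```
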
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2

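The total train batch size follows from micro_batch_size × gradient_accumulation_steps × num_devices = 4 × 4 × 4 = 64. As a rough, approximate mapping (not the trainer object Axolotl actually constructs internally), these values would correspond to a `transformers.TrainingArguments` along these lines:

```python
from transformers import TrainingArguments

# Approximate mapping of the reported hyperparameters; shown for orientation only.
args = TrainingArguments(
    output_dir="./Einstein-v8-Llama3.2-1B-model",  # output_dir from the config
    per_device_train_batch_size=4,                 # micro_batch_size
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=2,
    optim="adamw_bnb_8bit",
    weight_decay=0.0,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=1,
    seed=42,
)

# With 4 GPUs: 4 (per device) * 4 (grad accum) * 4 (devices) = 64 total train batch size.
```
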
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4261 | 0.0009 | 1 | 1.4028 |
| 1.0487 | 0.2501 | 268 | 0.9917 |
| 1.0484 | 0.5001 | 536 | 0.9652 |
| 1.0039 | 0.7502 | 804 | 0.9499 |
| 1.0528 | 1.0002 | 1072 | 0.9399 |
| 0.9559 | 1.2481 | 1340 | 0.9345 |
| 0.9078 | 1.4981 | 1608 | 0.9309 |
| 0.9702 | 1.7481 | 1876 | 0.9295 |
| 0.929 | 1.9981 | 2144 | 0.9292 |


### Framework versions

- Transformers 4.45.0
- PyTorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
generation_config.json ADDED
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": 128001,
  "transformers_version": "4.45.0"
}
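Note that this file keeps the base model's token ids (128000 for `<|begin_of_text|>`, 128001 for `<|end_of_text|>`), while the chat template above ends turns with `<|im_end|>`, so chat-style generation typically passes that id explicitly (as in the inference sketch earlier). A small sketch of loading these defaults, assuming the hub id from the training config:

```python
from transformers import GenerationConfig

# Load the published generation defaults for this repo.
gen_config = GenerationConfig.from_pretrained("Weyaxi/Einstein-v8-Llama3.2-1B")
print(gen_config.do_sample, gen_config.bos_token_id, gen_config.eos_token_id)

# do_sample=True means model.generate() samples by default; pass do_sample=False
# at call time for greedy decoding, or override eos_token_id for ChatML turns.
```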