SystemAdmin123 committed on
Commit 354a520 · verified · 1 Parent(s): ea7e47e

End of training

Files changed (2)
  1. README.md +126 -0
  2. generation_config.json +8 -0
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ library_name: transformers
+ base_model: fxmarty/tiny-llama-fast-tokenizer
+ tags:
+ - axolotl
+ - generated_from_trainer
+ datasets:
+ - argilla/databricks-dolly-15k-curated-en
+ model-index:
+ - name: tiny-llama-fast-tokenizer
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.6.0`
+ ```yaml
+ base_model: fxmarty/tiny-llama-fast-tokenizer
+ batch_size: 128
+ bf16: true
+ chat_template: tokenizer_default_fallback_alpaca
+ datasets:
+ - format: custom
+   path: argilla/databricks-dolly-15k-curated-en
+   type:
+     field_input: original-instruction
+     field_instruction: original-instruction
+     field_output: original-response
+     format: '{instruction} {input}'
+     no_input_format: '{instruction}'
+     system_format: '{system}'
+     system_prompt: ''
+ device_map: auto
+ eval_sample_packing: false
+ eval_steps: 200
+ flash_attention: true
+ gradient_checkpointing: true
+ group_by_length: true
+ hub_model_id: SystemAdmin123/tiny-llama-fast-tokenizer
+ hub_strategy: checkpoint
+ learning_rate: 0.0002
+ logging_steps: 10
+ lr_scheduler: cosine
+ max_steps: 10000
+ micro_batch_size: 32
+ model_type: AutoModelForCausalLM
+ num_epochs: 100
+ optimizer: adamw_bnb_8bit
+ output_dir: /root/.sn56/axolotl/tmp/tiny-llama-fast-tokenizer
+ pad_to_sequence_len: true
+ resize_token_embeddings_to_32x: false
+ sample_packing: true
+ save_steps: 200
+ save_total_limit: 1
+ sequence_len: 2048
+ special_tokens:
+   pad_token: </s>
+ tokenizer_type: LlamaTokenizerFast
+ torch_dtype: bf16
+ training_args_kwargs:
+   hub_private_repo: true
+ trust_remote_code: true
+ val_set_size: 0.1
+ wandb_entity: ''
+ wandb_mode: online
+ wandb_name: fxmarty/tiny-llama-fast-tokenizer-argilla/databricks-dolly-15k-curated-en
+ wandb_project: Gradients-On-Demand
+ wandb_run: your_name
+ wandb_runid: default
+ warmup_ratio: 0.05
+
+ ```
+
+ </details><br>
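+
+ One config detail worth flagging: `field_instruction` and `field_input` both map to the `original-instruction` column, so under the `'{instruction} {input}'` format the same text appears to fill both slots of each prompt. A rough illustration with a hypothetical dataset row (not from the card):
+
+ ```python
+ # Sketch of how the custom format above assembles a prompt.
+ row = {
+     "original-instruction": "Name three primary colors.",  # hypothetical example row
+     "original-response": "Red, yellow, and blue.",
+ }
+ instruction = row["original-instruction"]  # field_instruction
+ model_input = row["original-instruction"]  # field_input points at the same column
+ prompt = f"{instruction} {model_input}"    # format: '{instruction} {input}'
+ print(prompt)  # the instruction text is rendered twice
+ ```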
+
+ # tiny-llama-fast-tokenizer
+
+ This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the argilla/databricks-dolly-15k-curated-en dataset.
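+
+ A minimal usage sketch (not part of the auto-generated card; the repo id is taken from `hub_model_id` in the config above, and note the config marks the hub repo as private):
+
+ ```python
+ # Load the fine-tuned checkpoint with the standard transformers API
+ # and run a short generation.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo_id = "SystemAdmin123/tiny-llama-fast-tokenizer"  # hub_model_id from the config
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
+
+ inputs = tokenizer("Name three primary colors.", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```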
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 128
+ - optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 5
+ - training_steps: 100
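+
+ Outside the Trainer, this optimizer/scheduler pairing corresponds roughly to the following standalone setup (a sketch assuming a `model` is already built; the actual run was driven by Axolotl):
+
+ ```python
+ # Approximate standalone equivalent of the optimizer/scheduler settings above.
+ from bitsandbytes.optim import AdamW8bit
+ from transformers import get_cosine_schedule_with_warmup
+
+ optimizer = AdamW8bit(model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8)
+ scheduler = get_cosine_schedule_with_warmup(
+     optimizer,
+     num_warmup_steps=5,      # lr_scheduler_warmup_steps
+     num_training_steps=100,  # training_steps
+ )
+ ```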
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | No log | 0.1667 | 1 | 10.3764 |
+
+
+ ### Framework versions
+
+ - Transformers 4.48.1
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "do_sample": true,
+   "eos_token_id": 1,
+   "pad_token_id": 1,
+   "transformers_version": "4.48.1"
+ }
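
For reference, a minimal sketch (assuming read access to the repo, which the training config marks as private) of how `transformers` picks up these generation defaults:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("SystemAdmin123/tiny-llama-fast-tokenizer")
print(gen_config.do_sample)      # True -> generate() samples by default
print(gen_config.eos_token_id)   # 1; also reused here as pad_token_id
```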