Commit eca6fdd (verified) · Mdean77 committed · 1 parent: a8ae27b

mdean77/llama381binstruct_summarize_short_challenge

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,83 +1,58 @@
 ---
 base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
-datasets:
-- generator
-library_name: peft
-license: llama3.1
+library_name: transformers
+model_name: llama381binstruct_summarize_short
 tags:
+- generated_from_trainer
 - trl
 - sft
-- generated_from_trainer
-model-index:
-- name: llama381binstruct_summarize_short
-  results: []
+licence: license
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# llama381binstruct_summarize_short
-
-This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct) on the generator dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.2429
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 0.0002
-- train_batch_size: 1
-- eval_batch_size: 8
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 30
-- training_steps: 500
-
-### Training results
-
-| Training Loss | Epoch   | Step | Validation Loss |
-|:-------------:|:-------:|:----:|:---------------:|
-| 1.653         | 1.3158  | 25   | 1.3192          |
-| 0.7031        | 2.6316  | 50   | 1.3904          |
-| 0.3683        | 3.9474  | 75   | 1.5407          |
-| 0.1658        | 5.2632  | 100  | 1.9100          |
-| 0.0716        | 6.5789  | 125  | 1.9351          |
-| 0.036         | 7.8947  | 150  | 1.9408          |
-| 0.0372        | 9.2105  | 175  | 1.9649          |
-| 0.01          | 10.5263 | 200  | 2.1079          |
-| 0.009         | 11.8421 | 225  | 2.1175          |
-| 0.0144        | 13.1579 | 250  | 2.0791          |
-| 0.0064        | 14.4737 | 275  | 2.0624          |
-| 0.0048        | 15.7895 | 300  | 2.1707          |
-| 0.0039        | 17.1053 | 325  | 2.0981          |
-| 0.0026        | 18.4211 | 350  | 2.1469          |
-| 0.0021        | 19.7368 | 375  | 2.1868          |
-| 0.0021        | 21.0526 | 400  | 2.2096          |
-| 0.0018        | 22.3684 | 425  | 2.2259          |
-| 0.0017        | 23.6842 | 450  | 2.2357          |
-| 0.0016        | 25.0    | 475  | 2.2411          |
-| 0.0019        | 26.3158 | 500  | 2.2429          |
-
-### Framework versions
-
-- PEFT 0.12.0
-- Transformers 4.44.2
-- Pytorch 2.4.0+cu121
-- Datasets 3.0.0
-- Tokenizers 0.19.1
+# Model Card for llama381binstruct_summarize_short
+
+This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct).
+It has been trained using [TRL](https://github.com/huggingface/trl).
+
+## Quick start
+
+```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="Mdean77/llama381binstruct_summarize_short", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```
+
+## Training procedure
+
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/miketraindoc-university-of-utah/huggingface/runs/letgphjl)
+
+This model was trained with SFT.
+
+### Framework versions
+
+- TRL: 0.16.0
+- Transformers: 4.50.1
+- Pytorch: 2.6.0+cu124
+- Datasets: 3.4.1
+- Tokenizers: 0.21.1
+
+## Citations
+
+Cite TRL as:
+
+```bibtex
+@misc{vonwerra2022trl,
+    title        = {{TRL: Transformer Reinforcement Learning}},
+    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+    year         = 2020,
+    journal      = {GitHub repository},
+    publisher    = {GitHub},
+    howpublished = {\url{https://github.com/huggingface/trl}}
+}
+```
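The new Quick start above loads the model through a plain `transformers` pipeline, but the files in this commit (adapter_config.json, adapter_model.safetensors) are a LoRA adapter. As an alternative, here is a minimal sketch of loading it through PEFT; the repo id is taken from this commit's message, the prompt text is a placeholder, and `device_map="auto"` assumes `accelerate` is installed, so adjust as needed:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "mdean77/llama381binstruct_summarize_short_challenge"  # assumed repo id from the commit message

# Loads the base model named in adapter_config.json, then applies the LoRA adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Placeholder prompt; the model card targets short summarization.
messages = [{"role": "user", "content": "Summarize this clinical trial protocol section in two sentences: ..."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```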
adapter_config.json CHANGED
@@ -3,6 +3,9 @@
   "auto_mapping": null,
   "base_model_name_or_path": "NousResearch/Meta-Llama-3.1-8B-Instruct",
   "bias": "none",
+  "corda_config": null,
+  "eva_config": null,
+  "exclude_modules": null,
   "fan_in_fan_out": false,
   "inference_mode": true,
   "init_lora_weights": true,
@@ -11,6 +14,7 @@
   "layers_to_transform": null,
   "loftq_config": {},
   "lora_alpha": 32,
+  "lora_bias": false,
   "lora_dropout": 0.1,
   "megatron_config": null,
   "megatron_core": "megatron.core",
@@ -20,15 +24,16 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
+    "q_proj",
+    "k_proj",
     "gate_proj",
-    "v_proj",
-    "up_proj",
     "down_proj",
-    "k_proj",
-    "q_proj",
-    "o_proj"
+    "up_proj",
+    "o_proj",
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM",
+  "trainable_token_indices": null,
   "use_dora": false,
   "use_rslora": false
 }
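For readability, the updated adapter_config.json roughly corresponds to a PEFT `LoraConfig` like the sketch below. This is a reconstruction from the visible hunks only; in particular the LoRA rank `r` does not appear in this diff, so the value shown is a placeholder.

```python
from peft import LoraConfig

# Reconstructed from the hunks above; only the listed fields are visible in this diff.
lora_config = LoraConfig(
    r=16,                      # placeholder rank: the real value lives elsewhere in adapter_config.json
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
```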
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:60d95b10b6e140a9626a7058d5038528f2ff80148dc4569b881db56052046509
-size 40
+oid sha256:d70a33f6a812494c6b4b8f0d2f85cb82f04521181abd32a67c1dd3b06355f201
+size 167832240
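The adapter checkpoint grew from a 40-byte placeholder to roughly 168 MB of LoRA weights. Once the LFS object is downloaded, its contents can be listed with the `safetensors` library; a minimal sketch, assuming the file sits in the working directory:

```python
from safetensors import safe_open

# Print each LoRA tensor name and its shape without loading the whole checkpoint at once.
with safe_open("adapter_model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        print(name, tuple(f.get_tensor(name).shape))
```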
runs/Mar26_16-54-03_61f7f7d06b80/events.out.tfevents.1743008130.61f7f7d06b80.1273.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79413744c47653aae369b4649572223de50177cf98c0c3cd03367f325d963f87
+size 29690
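The added tfevents file is a TensorBoard log written by the Trainer during this run. A minimal sketch of reading it back, assuming `tensorboard` is installed; the scalar tag names (`train/loss`, `eval/loss`) are assumptions about what the Trainer logged:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point the accumulator at the run directory containing the events.out.tfevents.* file.
acc = EventAccumulator("runs/Mar26_16-54-03_61f7f7d06b80")
acc.Reload()

print(acc.Tags()["scalars"])              # discover which scalar tags were actually logged
for event in acc.Scalars("train/loss"):   # assumed tag name; pick one printed above
    print(event.step, event.value)
```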
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -2053,11 +2053,12 @@
   "chat_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}",
   "clean_up_tokenization_spaces": true,
   "eos_token": "<|eot_id|>",
+  "extra_special_tokens": {},
   "model_input_names": [
     "input_ids",
     "attention_mask"
   ],
   "model_max_length": 131072,
   "pad_token": "<|eot_id|>",
-  "tokenizer_class": "PreTrainedTokenizerFast"
+  "tokenizer_class": "PreTrainedTokenizer"
 }
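The `chat_template` above is the Llama 3.1 header/eot format. A minimal sketch of applying it via `apply_chat_template`; the repo id comes from the model card's Quick start, and the expected output assumes the tokenizer's `bos_token` is `<|begin_of_text|>`, which is standard for Llama 3.1 but not visible in this hunk:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Mdean77/llama381binstruct_summarize_short")

messages = [{"role": "user", "content": "Summarize this protocol section in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected, per the template above (assuming bos_token == "<|begin_of_text|>"):
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# Summarize this protocol section in two sentences.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```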
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f6daebb1eaa753f4a52e454faf9b635a703b31c05e8c966c3fd4586a3d1801d1
-size 5496
+oid sha256:317a50ee638041f79d369f632af1b6919038101379555ba6b0b86cc6378dc319
+size 5688
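training_args.bin is a pickled `TrainingArguments`/`SFTConfig` object, which is why its hash and size change whenever the training configuration changes. A minimal sketch of inspecting it, assuming you trust the checkpoint (loading requires `weights_only=False`, which unpickles arbitrary objects):

```python
import torch

# Only load pickled training arguments from sources you trust.
args = torch.load("training_args.bin", weights_only=False)
print(type(args).__name__)
print(args.learning_rate, args.per_device_train_batch_size, args.max_steps)
```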