SoundsFun committed on
Commit 16946a3 · verified · 1 Parent(s): 5b05e56

End of training

Files changed (2)
  1. README.md +43 -46
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -1,60 +1,57 @@
- ---
- library_name: peft
- license: bigcode-openrail-m
- base_model: bigcode/starcoderbase-1b
- tags:
- - generated_from_trainer
- model-index:
- - name: peft-starcoder-lora-a100
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # peft-starcoder-lora-a100
-
- This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - eval_loss: 1.0495
- - eval_runtime: 978.1933
- - eval_samples_per_second: 8.966
- - eval_steps_per_second: 1.121
- - epoch: 0.6
- - step: 1200
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0005
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 16
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 30
- - training_steps: 2000
-
- ### Framework versions
-
- - PEFT 0.14.0
- - Transformers 4.49.0
- - Pytorch 2.5.1
- - Datasets 3.3.2
- - Tokenizers 0.21.0
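For context, the hyperparameters in the removed card above translate roughly into the following `transformers`/`peft` setup. This is a minimal, hypothetical sketch: the dataset, tokenizer handling, and LoRA settings (`r`, `lora_alpha`, target modules) are assumptions, since the card does not record them.

```python
# Minimal sketch, assuming a PEFT LoRA setup on bigcode/starcoderbase-1b.
# The LoRA settings and dataset below are illustrative assumptions; only the
# TrainingArguments values come from the removed card's hyperparameter list.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

base = "bigcode/starcoderbase-1b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=32, task_type="CAUSAL_LM"))

args = TrainingArguments(
    output_dir="peft-starcoder-lora-a100",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=8,    # train_batch_size: 8
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    gradient_accumulation_steps=2,    # total_train_batch_size: 16
    lr_scheduler_type="cosine",
    warmup_steps=30,
    max_steps=2000,                   # training_steps: 2000
    seed=42,
    optim="adamw_torch",
)

# The card lists the dataset as "unknown", so train/eval datasets are left as
# placeholders; plug in tokenized datasets before calling trainer.train().
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```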
+ ---
+ base_model: ybelkada/falcon-7b-sharded-bf16
+ library_name: transformers
+ model_name: peft-starcoder-lora-a100
+ tags:
+ - generated_from_trainer
+ - trl
+ - sft
+ licence: license
+ ---
+
+ # Model Card for peft-starcoder-lora-a100
+
+ This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="SoundsFun/peft-starcoder-lora-a100", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pasechnikm-mephi/huggingface/runs/ke1xns28)
+
+ This model was trained with SFT.
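The new card stops at this one-line summary. Below is a minimal, hypothetical sketch of what such an SFT run looks like with TRL's `SFTTrainer`; the dataset, LoRA config, and step count are illustrative assumptions, not the settings behind this checkpoint.

```python
# Minimal sketch of supervised fine-tuning (SFT) with TRL. The dataset,
# LoRA config, and max_steps are assumptions for illustration only.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Any dataset with a plain "text" column works; this split is just an example.
dataset = load_dataset("stanfordnlp/imdb", split="train[:1%]")

trainer = SFTTrainer(
    model="ybelkada/falcon-7b-sharded-bf16",  # base model named in the card
    args=SFTConfig(
        output_dir="peft-starcoder-lora-a100",
        dataset_text_field="text",
        max_steps=100,
    ),
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
trainer.save_model()
```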
+
+ ### Framework versions
+
+ - TRL: 0.12.0
+ - Transformers: 4.49.0
+ - Pytorch: 2.5.1
+ - Datasets: 3.4.0
+ - Tokenizers: 0.21.0
+
+ ## Citations
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title        = {{TRL: Transformer Reinforcement Learning}},
+     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year         = 2020,
+     journal      = {GitHub repository},
+     publisher    = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5c97c6c5240cc5320d96731f4db55dd6944947b9d53bf8f88cd392f72f70ee9d
+ oid sha256:cf78f7aa6af00dae63a39eed8db459f48adc33b3d3934d0a4dca159dfdae0c8e
  size 261131840
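The adapter weights in adapter_model.safetensors are what this commit actually replaces. A rough sketch of loading them with `peft` is shown below; it assumes the adapter applies to the base model named in the new card, which may need adjusting.

```python
# Hypothetical sketch: attach the LoRA adapter from this repo to its base model.
# The base model id is taken from the card and is an assumption here.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "ybelkada/falcon-7b-sharded-bf16"
adapter_id = "SoundsFun/peft-starcoder-lora-a100"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads adapter_model.safetensors

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```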