NarendraK1 committed · Commit d03a167 · verified · 1 Parent(s): 8b8fd02

Model save

README.md ADDED
@@ -0,0 +1,58 @@
---
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: transformers
model_name: CodeLlama-7B-CodeForces-CoTs-SFT
tags:
- generated_from_trainer
- trl
- sft
license: license
---

# Model Card for CodeLlama-7B-CodeForces-CoTs-SFT

This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Build a chat-style text-generation pipeline around the fine-tuned checkpoint
generator = pipeline("text-generation", model="NarendraK1/CodeLlama-7B-CodeForces-CoTs-SFT", device="cuda")
# Return only the newly generated tokens, not the echoed prompt
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
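
For more control over precision, device placement, and decoding, the checkpoint can also be loaded directly. The following is a minimal sketch, assuming the tokenizer ships the base model's chat template; the prompt and generation settings are illustrative, not prescribed:

```python
# Minimal sketch, assuming the tokenizer carries the CodeLlama chat template;
# the prompt and generation settings below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NarendraK1/CodeLlama-7B-CodeForces-CoTs-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a function that checks whether a string is a palindrome."}]
# Render the chat messages into the model's prompt format
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```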

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chirru2004-kle-tech/huggingface/runs/jo5wvm4z)

This model was trained with SFT.
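
For context, a TRL SFT run of this shape typically looks like the sketch below. This is not the actual training script: the dataset id, batch size, and every other hyperparameter here are assumptions made for illustration.

```python
# Hypothetical sketch of an SFT setup with TRL's SFTTrainer; the dataset id
# and all hyperparameters are illustrative assumptions, not the values
# actually used to produce this checkpoint.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed CodeForces chain-of-thought dataset (a guess based on the model name)
dataset = load_dataset("open-r1/codeforces-cots", split="train")

training_args = SFTConfig(
    output_dir="CodeLlama-7B-CodeForces-CoTs-SFT",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2.0e-5,
    num_train_epochs=3,   # all_results.json below is consistent with ~3 epochs
    logging_steps=10,
    report_to="wandb",    # matches the W&B badge above
)

trainer = SFTTrainer(
    model="codellama/CodeLlama-7b-Instruct-hf",  # base model from the card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```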

### Framework versions

- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- PyTorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
all_results.json ADDED
@@ -0,0 +1,8 @@
{
    "total_flos": 3.0485438709270315e+19,
    "train_loss": 0.06383175736532645,
    "train_runtime": 10917.8223,
    "train_samples": 40665,
    "train_samples_per_second": 11.174,
    "train_steps_per_second": 0.931
}
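
As a quick sanity check on these figures: 11.174 samples/s × 10,917.8 s ≈ 122,000 samples processed, almost exactly three passes over the 40,665 training samples, which suggests a 3-epoch run (assuming sequence packing did not change the effective sample count).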
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
    "_from_model_config": true,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "transformers_version": "4.52.0.dev0"
}
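
For reference, these are the standard Llama-family special-token ids (1 = `<s>`, 2 = `</s>`), carried over from the base model's configuration.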
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:8ffa922766a1af39b16a21876cfc625c571c067603e60b0de3aaa0c39eb53cb3
+ oid sha256:8023770acedc7d644fe95281fd75a80965f612a03b66d89c31d003edc358fda9
 size 4939116424
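
The `.safetensors` entries are Git LFS pointer files: the repository tracks only a SHA-256 hash and a byte size, while the weights themselves live in LFS storage. An unchanged size with a new hash, as in all three shards here, is what an in-place weight update looks like.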
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c70b78a718a01c452256011eb2aad9533fc693f16faac698cbe66691805f61fe
+ oid sha256:69be68ae5a7a0cb2f1f6f849800dd9b847ef5ee00a63b2ec00ec704f6136bb93
 size 4947390880
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:d690066f1df91b18e39b3dccee2363e53f5b61b1a708aa4ba75b13ad93b8ed35
+ oid sha256:22c33530b6338bb62bac33d51487bb78534ced24faf0e041b3868a2925c4b117
 size 3590619888
train_results.json ADDED
@@ -0,0 +1,8 @@
{
    "total_flos": 3.0485438709270315e+19,
    "train_loss": 0.06383175736532645,
    "train_runtime": 10917.8223,
    "train_samples": 40665,
    "train_samples_per_second": 11.174,
    "train_steps_per_second": 0.931
}
trainer_state.json ADDED
The diff for this file is too large to render; see the raw diff.