Muennighoff committed
Commit 0221ed8 · verified · 1 Parent(s): 4532948

Update README.md

Files changed (1):
  1. README.md +11 -34
README.md CHANGED
@@ -1,58 +1,35 @@
  ---
  base_model: Qwen/Qwen2.5-32B-Instruct
  library_name: transformers
- model_name: Qwen2.5-32B-Instruct-20250104_095632
+ model_name: step-conditional-control
  tags:
  - generated_from_trainer
  - trl
  - sft
- licence: license
+ license: apache-2.0
  ---

- # Model Card for Qwen2.5-32B-Instruct-20250104_095632
+ # Model Summary

- This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
- It has been trained using [TRL](https://github.com/huggingface/trl).
+ - **Repository:** [simplescaling/s1](https://github.com/simplescaling/s1)
+ - **Paper:** TODO

- ## Quick start
+ # Use

- ```python
- from transformers import pipeline
+ This is an older step-conditional control model from our paper, used only for the Discussion section. You can evaluate it using the information [here](https://github.com/simplescaling/s1?tab=readme-ov-file#evaluation).

- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="qfq/Qwen2.5-32B-Instruct-20250104_095632", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure
+ # Training information

  [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hashimoto-group/o1/runs/rkflww8e)

-
- This model was trained with SFT.
-
- ### Framework versions
-
  - TRL: 0.13.0
- - Transformers: 4.47.1
+ - Transformers: 4.48.0
  - Pytorch: 2.3.1
  - Datasets: 3.0.1
  - Tokenizers: 0.21.0

- ## Citations
-
-
+ # Citation

- Cite TRL as:
-
  ```bibtex
- @misc{vonwerra2022trl,
- title = {{TRL: Transformer Reinforcement Learning}},
- author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
- year = 2020,
- journal = {GitHub repository},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/huggingface/trl}}
- }
+ TODO
  ```
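
The updated card drops the quick-start snippet that the old card carried without replacing it. For reference, a minimal inference sketch in the style of the removed example is shown below; the Hub repo ID `simplescaling/step-conditional-control` is an assumption (the actual repo path is not stated in this commit), and a CUDA device is assumed to be available.

```python
from transformers import pipeline

# NOTE: placeholder repo ID (assumption) -- substitute the actual Hub path of this model.
model_id = "simplescaling/step-conditional-control"

question = (
    "If you had a time machine, but could only go to the past or the future "
    "once and never return, which would you choose and why?"
)

# Chat-style text generation on GPU, mirroring the quick-start removed in this commit.
generator = pipeline("text-generation", model=model_id, device="cuda")
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```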