Safetensors
qwen2
reasoning
ptrdvn committed on
Commit dcf5b54 · verified · 1 Parent(s): ae3718a

Update README.md

Files changed (1)
  1. README.md +240 -64
README.md CHANGED
@@ -1,73 +1,249 @@
  ---
- library_name: transformers
- license: other
- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: reasoning-multilingual-R1-Llama-70B-train
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # reasoning-multilingual-R1-Llama-70B-train

- This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) on the reasoning-multilingual-R1-Llama-70B-train dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.6135

- ## Model description

- More information needed

- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - total_train_batch_size: 8
- - total_eval_batch_size: 8
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.01
- - num_epochs: 1.0
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.9061 | 0.1089 | 11 | 0.7100 |
- | 0.6565 | 0.2178 | 22 | 0.6685 |
- | 0.601 | 0.3267 | 33 | 0.6480 |
- | 0.7668 | 0.4356 | 44 | 0.6343 |
- | 0.8058 | 0.5446 | 55 | 0.6248 |
- | 0.675 | 0.6535 | 66 | 0.6190 |
- | 0.5383 | 0.7624 | 77 | 0.6156 |
- | 0.5044 | 0.8713 | 88 | 0.6139 |
- | 0.6322 | 0.9802 | 99 | 0.6136 |
-
-
- ### Framework versions
-
- - Transformers 4.46.1
- - Pytorch 2.5.1+cu124
- - Datasets 3.1.0
- - Tokenizers 0.20.3
  ---
+ language:
+ - am
+ - ar
+ - bn
+ - zh
+ - cs
+ - nl
+ - en
+ - fr
+ - de
+ - el
+ - ha
+ - he
+ - hi
+ - id
+ - it
+ - ja
+ - jv
+ - km
+ - ko
+ - lo
+ - ms
+ - mr
+ - fa
+ - pl
+ - pt
+ - ro
+ - ru
+ - es
+ - sw
+ - sv
+ - tl
+ - ta
+ - te
+ - th
+ - tr
+ - uk
+ - ur
+ - vi
+ license: apache-2.0
+ datasets:
+ - lightblue/reasoning-multilingual-R1-Llama-70B-train
  tags:
+ - reasoning
  ---
+
+ # lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual
+
+ This is a fine-tune of a DeepSeek R1 distill model, trained on multilingual Chain-of-Thought (CoT) data.
+ When this model is prompted in a language, it will both think and respond in that language, unlike the original R1, which will often think in either Chinese or English.
+ This makes the outputs of these AIs more understandable and explainable to a wider audience.
+ We hope this will be useful to the AI community, particularly to those developing models for languages other than English and Chinese.
+
+ This model is a multilingual fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B).
+
+ Other fine-tuned versions of this model can be found in [our collection, here](https://huggingface.co/collections/lightblue/r1-multilingual-679c890166ac0a84e83e38fa).
+
+ This model was trained for ~10 minutes on an 8 x L20 instance ([ecs.gn8is-8x.32xlarge](https://www.alibabacloud.com/help/en/ecs/user-guide/gpu-accelerated-compute-optimized-and-vgpu-accelerated-instance-families-1)) on [Alibaba Cloud](https://www.alibabacloud.com/).
+
+ # How to use
+
+ When using these models, we recommend using a sampling temperature between 0.5 and 0.7, [as per the original distilled R1 models](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B#usage-recommendations).
+
+ Additionally, we have observed that the model sometimes tends to repeat itself for more niche languages, so we also recommend setting `repetition_penalty` to 1.1, or higher if the model still repeats itself when processing your prompts (a Transformers sketch applying these settings follows the vLLM example below).
+
+ We include a script for using this model in vLLM:
+
+ <ul>
+ <li><b>vLLM</b>
+
+ Install [vLLM](https://github.com/vllm-project/vllm/) using `pip install vllm`.
+
+ <details open>
+ <summary>Show vLLM code</summary>
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(
+     model="lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual",
+     max_model_len=8_000
+ )
+
+ sampling_params = SamplingParams(
+     temperature=0.5,
+     max_tokens=8_000
+     # repetition_penalty=1.1 can also be set here for more niche languages
+ )
+
+ # Example prompt (Japanese): "A school has 20 students per class and 3 classes in total.
+ # Across the whole school, 50% of students are boys and 50% are girls.
+ # The first class has 15 girls and the second class has 12 girls.
+ # How many boys are in the third class?"
+ prompts = [
+     """学校には1クラスにつき20人の生徒がおり、クラスは合計3つあります。
+ 学校全体では男子と女子がそれぞれ50%ずついます。
+ 1つ目のクラスには女子が15人、2つ目のクラスには女子が12人います。
+ 3つ目のクラスには何人の男子がいますか?"""
+ ]
+
+ conversations = [
+     [{"role": "user", "content": x}] for x in prompts
+ ]
+
+ outputs = llm.chat(conversations, sampling_params=sampling_params)
+
+ for output in outputs:
+     print(output.outputs[0].text)
+
+ # Example output (the model thinks and answers in Japanese; the final answer is 17 boys):
+ # <think>
+ # まず、学校の総生徒数を算出します。各クラスに20人の生徒があり、クラスは3つあるため、総生徒数は60人です。
+
+ # 次に、学校全体で男子と女子は同じ人数で分布しています。したがって、男子と女子各有30人。
+ # ...
+ # したがって、3つ目のクラスの男子数は20 - 3 = 17人です。
+ # </think>
+
+ # **解答:**
+
+ # 学校の総生徒数を算出します。
+ # ...
+ # **最終的な答え:**
+ # \[
+ # \boxed{17}
+ # \]
+ ```
+
+ </details></li>
+ </ul>
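+
+ Below is a minimal sketch of applying the same recommendations (temperature and `repetition_penalty`) with the Hugging Face Transformers library; the prompt and generation settings here are illustrative assumptions rather than part of our evaluation setup:
+
+ ```python
+ # Illustrative sketch (not the original card's script): applying the recommended
+ # temperature (0.5-0.7) and repetition_penalty (1.1) with Hugging Face Transformers.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ # Example prompt in German: "How many prime numbers are there between 1 and 20?"
+ messages = [{"role": "user", "content": "Wie viele Primzahlen gibt es zwischen 1 und 20?"}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(
+     inputs,
+     max_new_tokens=2048,
+     do_sample=True,
+     temperature=0.5,         # recommended range: 0.5-0.7
+     repetition_penalty=1.1,  # helps reduce repetition for more niche languages
+ )
+ # Decode only the newly generated tokens (the <think> block plus the answer)
+ print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
+ ```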
+
+ # Evaluation
+
+ Through some quick evaluation of our own, we found that this model produces correctly formatted and accurate results much more often for higher-resource languages, such as Japanese, English, and German, than for lower-resource languages, such as Amharic or Lao.
+
+ We did a **very** quick evaluation of 5 questions for each language (written by me and translated by GPT-4o Mini) on the [lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) model, and we find that the model is able to fairly reliably output the correct answers, in the correct language, for a large variety of languages:
+
+ For this evaluation, a score of >=0.8 is good, as one of the questions was very hard. The language detection was done using [pycld2](https://pypi.org/project/pycld2/), so errors may occur where the correct language is mistaken for another one.
+
+ | language | Has a correct think statement | Has the think statement in the correct language | Is the response in the correct language | Is the answer correct |
+ |:----------------|------------:|------------------------:|----------------------:|-------------:|
+ | Amharic | 0.2 | 0 | 0 | 0 |
+ | Arabic | 1 | 0.8 | 0.8 | 0.6 |
+ | Bengali | 1 | 1 | 1 | 0.2 |
+ | Chinese | 1 | 1 | 1 | 0.8 |
+ | Czech | 1 | 1 | 1 | 0.8 |
+ | Dutch | 1 | 1 | 1 | 0.8 |
+ | English | 1 | 1 | 1 | 0.8 |
+ | French | 1 | 1 | 1 | 0.8 |
+ | German | 1 | 1 | 1 | 0.8 |
+ | Greek | 1 | 1 | 1 | 0.6 |
+ | Hausa | 0.4 | 0 | 0 | 0 |
+ | Hebrew | 1 | 0.8 | 1 | 0.6 |
+ | Hindi | 1 | 1 | 1 | 0.8 |
+ | Indonesian | 1 | 1 | 1 | 0.8 |
+ | Italian | 1 | 1 | 1 | 0.8 |
+ | Japanese | 1 | 1 | 0.8 | 0.6 |
+ | Javanese | 0.8 | 0.2 | 0.2 | 0.6 |
+ | Khmer | 0.6 | 0.6 | 0.6 | 0 |
+ | Korean | 1 | 1 | 1 | 1 |
+ | Lao | 0.4 | 0.4 | 0.4 | 0 |
+ | Malay | 1 | 0.4 | 0.4 | 0.8 |
+ | Marathi | 0.6 | 0.4 | 0.6 | 0.2 |
+ | Persian (Farsi) | 0.6 | None* | None* | 0.2 |
+ | Polish | 1 | 1 | 1 | 0.6 |
+ | Portuguese | 1 | 1 | 1 | 0.8 |
+ | Romanian | 1 | 1 | 1 | 0.8 |
+ | Russian | 1 | 1 | 1 | 0.8 |
+ | Spanish | 1 | 1 | 1 | 0.8 |
+ | Swahili | 0.4 | 0.4 | 0.4 | 0 |
+ | Swedish | 1 | 1 | 1 | 0.8 |
+ | Tagalog | 1 | 1 | 1 | 0.8 |
+ | Tamil | 0.8 | 0.8 | 0.8 | 0.2 |
+ | Telugu | 0.8 | 0.6 | 0.8 | 0 |
+ | Thai | 1 | 1 | 1 | 0.8 |
+ | Turkish | 1 | 1 | 1 | 0.8 |
+ | Ukrainian | 1 | 1 | 1 | 0.8 |
+ | Urdu | 1 | 1 | 1 | 0.6 |
+ | Vietnamese | 1 | 1 | 1 | 1 |
+
+ \* There was an error with Farsi detection (my own fault) so we do not report Farsi scores.
+
+ The evaluation code for this can be found [here](https://drive.google.com/file/d/1P33GpqvKmHoZUsWqqBPXHTToN2W7MDRG/view?usp=sharing).
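+
+ For reference, a language check with [pycld2](https://pypi.org/project/pycld2/) can be as simple as the sketch below. This is an illustrative example only; the linked evaluation script is the authoritative version, and the helper function name is our own:
+
+ ```python
+ # Illustrative sketch of a pycld2-based language check, similar in spirit to
+ # (but not identical to) the evaluation described above.
+ import pycld2 as cld2
+
+ def detected_language_code(text: str) -> str:
+     """Return the language code pycld2 detects for `text` ("un" if unreliable)."""
+     is_reliable, _, details = cld2.detect(text)
+     # `details` holds (language_name, language_code, percent, score) tuples.
+     return details[0][1] if is_reliable else "un"
+
+ # Example: check that a Japanese model response is detected as Japanese ("ja").
+ response = "学校の総生徒数は60人なので、3つ目のクラスには男子が17人います。"
+ print(detected_language_code(response) == "ja")
+ ```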
+
+ # Training code
+
+ ```yaml
+ ### model
+ model_name_or_path: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
+
+ ### method
+ stage: sft
+ do_train: true
+ finetuning_type: full
+ deepspeed: /root/LLaMA-Factory/examples/deepspeed/ds_z2_config.json
+
+ ### dataset
+ dataset: reasoning-multilingual-R1-Llama-70B-train
+ template: qwen
+ cutoff_len: 4500
+ overwrite_cache: true
+ preprocessing_num_workers: 16
+ packing: true
+
+ ### output
+ output_dir: /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/reasoning-multilingual-R1-Llama-70B-train
+ logging_steps: 1
+ save_steps: 0.99999
+ plot_loss: true
+ overwrite_output_dir: true
+
+ ### train
+ per_device_train_batch_size: 1
+ gradient_accumulation_steps: 1
+ learning_rate: 1.0e-5
+ num_train_epochs: 1.0
+ lr_scheduler_type: cosine
+ warmup_ratio: 0.01
+ bf16: true
+ ddp_timeout: 180000000
+
+ ### eval
+ val_size: 0.01
+ per_device_eval_batch_size: 1
+ eval_strategy: steps
+ eval_steps: 0.1
+ ```
+
+ ```bash
+ echo '{
+   "reasoning-multilingual-R1-Llama-70B-train": {
+     "hf_hub_url": "lightblue/reasoning-multilingual-R1-Llama-70B-train",
+     "formatting": "sharegpt"
+   }
+ }' > /root/LLaMA-Factory/data/dataset_info.json
+
+ # 7B Qwen
+ cd /root/LLaMA-Factory && llamafactory-cli train /root/reasoning_multilingual_train_7B.yaml
+ rm -r /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/reasoning-multilingual-R1-Llama-70B-train/checkpoint*
+ huggingface-cli upload lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/reasoning-multilingual-R1-Llama-70B-train
+ ```
+
+ # License
+
+ We share this model under the Apache 2.0 license.
+
+ # Developed by
+
+ <a href="https://www.lightblue-tech.com">
+ <img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/>
+ </a>
+
+ This model was trained by Peter Devine ([ptrdvn](https://huggingface.co/ptrdvn)) for Lightblue.