lbourdois committed on
Commit
6abad02
verified
1 Parent(s): 00f5fe3

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.
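The change boils down to editing the `language:` list in the README's YAML front matter. As a minimal, stdlib-only sketch (a hypothetical helper, not part of this PR or of any Hugging Face tooling), one could extract the list to sanity-check the codes before committing:

```python
import re

def extract_languages(readme_text: str) -> list[str]:
    """Pull the `language:` list out of a model card's YAML front matter.

    Hypothetical sketch for illustration; it only handles the simple
    `- code` block-sequence style used in this diff. A real tool would
    use a proper YAML parser.
    """
    # Front matter is the block between the first pair of `---` fences.
    match = re.match(r"---\n(.*?)\n---", readme_text, re.DOTALL)
    if not match:
        return []
    front_matter = match.group(1)
    # Grab the lines immediately following `language:`.
    lang_match = re.search(
        r"^language:\n((?:- \w+\n?)+)", front_matter, re.MULTILINE
    )
    if not lang_match:
        return []
    return re.findall(r"- (\w+)", lang_match.group(1))

card = """---
library_name: transformers
language:
- jpn
- eng
---
# Take-7B
"""
print(extract_languages(card))  # ['jpn', 'eng']
```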

Files changed (1)
  1. README.md +40 -28
README.md CHANGED
@@ -1,29 +1,41 @@
- ---
- library_name: transformers
- license: apache-2.0
- datasets:
- - llm-jp/oasst2-33k-ja
- language:
- - ja
- base_model:
- - Qwen/Qwen2.5-7B
- inference: false
- ---
-
- # Take-7B
-
- ## Description
- Take-7B is a model that was instruction-tuned on the oasst2, using Qwen2.5-7B as its base model.
-
- ## Series
- | Variant | Link |
- | --- | --- |
- | Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
- | Matsu-7B | [Manual-Dataset-Creation-Project/Matsu-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Matsu-7B) |
-
- ## Contributors
- - [Sudy](https://huggingface.co/sudy-super)
- - [ほーりーふぉっくす](https://huggingface.co/Holy-fox)
-
- ## Acknowledgments
  We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.
 
+ ---
+ library_name: transformers
+ license: apache-2.0
+ datasets:
+ - llm-jp/oasst2-33k-ja
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model:
+ - Qwen/Qwen2.5-7B
+ inference: false
+ ---
+
+ # Take-7B
+
+ ## Description
+ Take-7B is a model that was instruction-tuned on the oasst2, using Qwen2.5-7B as its base model.
+
+ ## Series
+ | Variant | Link |
+ | --- | --- |
+ | Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
+ | Matsu-7B | [Manual-Dataset-Creation-Project/Matsu-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Matsu-7B) |
+
+ ## Contributors
+ - [Sudy](https://huggingface.co/sudy-super)
+ - [ほーりーふぉっくす](https://huggingface.co/Holy-fox)
+
+ ## Acknowledgments
  We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.