POWERHACK lbourdois committed on
Commit d58b2a3 · verified · 1 Parent(s): 9e6c47d

Improve language tag (#1)

- Improve language tag (7a0ac207504a94e0485afc7807a9528e828c764b)


Co-authored-by: Loïck BOURDOIS <[email protected]>
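The commit replaces the model card's missing `language:` metadata with a list of three-letter ISO 639-3 codes (`zho`, `eng`, `fra`, …). As a quick reference, the sketch below maps those thirteen codes to the two-letter ISO 639-1 tags also commonly seen on model cards. The mapping is hand-written for illustration only; it is not the Hub's own tag-validation logic.

```python
# Illustrative only: ISO 639-3 -> ISO 639-1 for the thirteen language codes
# added in this commit. Not the Hugging Face Hub's validator.
ISO_639_3_TO_1 = {
    "zho": "zh", "eng": "en", "fra": "fr", "spa": "es", "por": "pt",
    "deu": "de", "ita": "it", "rus": "ru", "jpn": "ja", "kor": "ko",
    "vie": "vi", "tha": "th", "ara": "ar",
}

def to_iso639_1(code: str) -> str:
    """Return the two-letter tag for a known three-letter code, else the input unchanged."""
    return ISO_639_3_TO_1.get(code.lower(), code)

print(to_iso639_1("eng"))  # -> en
```

The Hub accepts either form in the `language:` list, so the commit's three-letter codes are valid as-is.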

Files changed (1)
  1. README.md +59 -45
README.md CHANGED
@@ -1,46 +1,60 @@
- ---
- tags:
- - autotrain
- - text-generation-inference
- - text-generation
- - peft
- library_name: transformers
- base_model: Qwen/Qwen2.5-3B
- widget:
- - messages:
-   - role: user
-     content: What is your favorite condiment?
- license: other
- ---
-
- # Model Trained Using AutoTrain
-
- This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
-
- # Usage
-
- ```python
-
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_path = "PATH_TO_THIS_REPO"
-
- tokenizer = AutoTokenizer.from_pretrained(model_path)
- model = AutoModelForCausalLM.from_pretrained(
-     model_path,
-     device_map="auto",
-     torch_dtype='auto'
- ).eval()
-
- # Prompt content: "hi"
- messages = [
-     {"role": "user", "content": "hi"}
- ]
-
- input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
- output_ids = model.generate(input_ids.to('cuda'))
- response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
-
- # Model response: "Hello! How can I assist you today?"
- print(response)
  ```
 
+ ---
+ tags:
+ - autotrain
+ - text-generation-inference
+ - text-generation
+ - peft
+ library_name: transformers
+ base_model: Qwen/Qwen2.5-3B
+ widget:
+ - messages:
+   - role: user
+     content: What is your favorite condiment?
+ license: other
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+
+ # Model Trained Using AutoTrain
+
+ This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
+
+ # Usage
+
+ ```python
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_path = "PATH_TO_THIS_REPO"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_path,
+     device_map="auto",
+     torch_dtype='auto'
+ ).eval()
+
+ # Prompt content: "hi"
+ messages = [
+     {"role": "user", "content": "hi"}
+ ]
+
+ input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
+ output_ids = model.generate(input_ids.to('cuda'))
+ response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
+
+ # Model response: "Hello! How can I assist you today?"
+ print(response)
  ```
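Two notes on the usage snippet in the diff above. First, it hard-codes `input_ids.to('cuda')` even though the model is loaded with `device_map="auto"`; on a CPU-only machine this raises an error, and `input_ids.to(model.device)` is a device-agnostic alternative. Second, the decode line slices `output_ids` at `input_ids.shape[1]` because `generate` returns the prompt and the continuation as one sequence. A minimal offline sketch of that slicing step, with plain lists of dummy token ids standing in for the real tensors (no model download needed):

```python
# Dummy stand-ins for the README's tensors: a "prompt" of 4 token ids and a
# "generation" that repeats the prompt followed by 3 new ids, mimicking how
# model.generate() returns prompt + continuation in one sequence.
input_ids = [[101, 7592, 2088, 102]]            # batch of 1, length 4
output_ids = [[101, 7592, 2088, 102, 5, 6, 7]]  # same prompt + 3 new ids

prompt_len = len(input_ids[0])  # plays the role of input_ids.shape[1]

# Same slice as the README: drop the prompt positions so that only the
# newly generated ids would be passed to tokenizer.decode().
new_token_ids = output_ids[0][prompt_len:]
print(new_token_ids)  # -> [5, 6, 7]
```

With the real tensors, `tokenizer.decode(new_token_ids, skip_special_tokens=True)` then yields only the model's reply, not the echoed prompt.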