---
base_model:
- Qwen/Qwen2.5-32B-Instruct
---

## Model Overview

This repository, `ModelFuture-Distill-Qwen-32B-SFT-v1`, is intended for testing purposes. The model was produced by applying Supervised Fine-Tuning (SFT) directly to the base model, `Qwen/Qwen2.5-32B-Instruct`.

## Intended Use

This model is intended primarily for testing and validation. It can be used to:
- Evaluate the performance of the distilled model on various tasks.
- Test the functionality and robustness of the model in different environments.
- Provide a baseline for further development and optimization.

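For the testing scenarios above, it helps to wrap prompt construction in a small helper so the same system prompt is reused across test cases. A minimal sketch (the `build_messages` helper and the example prompts are illustrative, not part of this repository):

```python
# Build a chat message list in the role/content format expected by
# tokenizer.apply_chat_template. Helper and prompts are illustrative only.
def build_messages(prompt, system="You are a helpful assistant."):
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]

test_prompts = [
    "I want to start exercising. Please give me some advice.",
    "Summarize the benefits of regular sleep in one sentence.",
]
batches = [build_messages(p) for p in test_prompts]
print(batches[0][1]["content"])  # the first user prompt
```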
The following snippet shows how to load the tokenizer and model and generate a response using `apply_chat_template`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "zhuguoku/ModelFuture-Distill-Qwen-32B-SFT-v1"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "I want to start exercising. Please give me some advice."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Trim the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
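The list comprehension near the end of the snippet trims the prompt tokens from each generated sequence before decoding. Its effect can be checked in isolation with plain Python lists, without loading the model (the token IDs below are made up for illustration):

```python
# Trim the prompt prefix from each generated sequence, mirroring the
# slicing step in the snippet above. Token IDs here are arbitrary.
def trim_prompt(input_ids, generated_ids):
    return [out[len(inp):] for inp, out in zip(input_ids, generated_ids)]

prompt_ids = [[101, 7592, 102]]                # 3 prompt tokens
full_output = [[101, 7592, 102, 2000, 2001]]   # prompt + 2 new tokens
print(trim_prompt(prompt_ids, full_output))    # [[2000, 2001]]
```

Decoding only the trimmed IDs is what keeps the echoed prompt out of `response`.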