Amu committed · verified
Commit c93af13 · 1 parent: 9979dfe

Update README.md

Files changed (1): README.md (+26 −1)
README.md CHANGED
@@ -52,4 +52,29 @@ It's test model. I hope I can reproduce a rl model like RL-Zero.
 
 This model is a mini-step.
 
-Thanks for evveryone in the open community.
+Thanks to everyone in the open community.
+
+How to use:
+
+```python
+from vllm import LLM, SamplingParams
+from transformers import AutoTokenizer
+
+model = LLM("Amu/t1-1.5B")
+tok = AutoTokenizer.from_pretrained("simplescaling/s1-32B")
+
+# Stop generation at the ChatML end-of-turn token.
+stop_token_ids = tok("<|im_end|>")["input_ids"]
+
+sampling_params = SamplingParams(
+    max_tokens=32768,
+    min_tokens=0,
+    stop_token_ids=stop_token_ids,
+)
+
+prompt = "How many r in raspberry"
+prompt = (
+    "<|im_start|>system\nYou are t1, created by Amu. You are a helpful assistant.<|im_end|>\n"
+    "<|im_start|>user\n" + prompt + "<|im_end|>\n<|im_start|>assistant\n"
+)
+
+o = model.generate(prompt, sampling_params=sampling_params)
+print(o[0].outputs[0].text)
+```
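The prompt string concatenated inline above is easy to get subtly wrong. As a sanity check, the same ChatML-style string can be produced by a small helper; this is a hypothetical convenience function, not part of the model repo, and the system message is the one the README hard-codes for t1:

```python
# Hypothetical helper (not in the repo): rebuilds the ChatML prompt
# used in the README's vLLM example.
def build_chatml_prompt(
    user_msg: str,
    system_msg: str = "You are t1, created by Amu. You are a helpful assistant.",
) -> str:
    """Wrap a user message in Qwen2-style ChatML markup, ending with an
    open assistant turn so the model generates the reply."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("How many r in raspberry")
```

The resulting `prompt` is identical to the manually concatenated string in the example and can be passed to `model.generate` the same way.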