PhilSad committed · Commit d041690 · verified · 1 Parent(s): dd428ea

Update README.md

Files changed (1):
  1. README.md +18 -5
README.md CHANGED
@@ -19,12 +19,25 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 ## Quick start
 
 ```python
-from transformers import pipeline
-
-question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="PhilSad/SmolLM2-135M-FT-SCP-Wiki", device="cuda")
-output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
-print(output["generated_text"])
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_name = "PhilSad/SmolLM2-1.7B-FT-SCP-Wiki"
+model = AutoModelForCausalLM.from_pretrained(
+    pretrained_model_name_or_path=model_name
+).to(device)
+tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)
+
+prompt = "SCP-10214 is a god who loves making pasta."
+
+messages = [{"role": "user", "content": prompt}]
+formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False)
+
+inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
+
+outputs = model.generate(**inputs, max_new_tokens=2048)
+
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
 ## Training procedure
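
Note that the added snippet calls `.to(device)` without ever defining `device`, so it raises a `NameError` as committed. Below is a minimal runnable sketch of the same quick start, assuming PyTorch is installed; the `device` selection and `add_generation_prompt=True` are additions not present in the commit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: pick the GPU when available; the committed snippet
# references `device` without defining it.
device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "PhilSad/SmolLM2-1.7B-FT-SCP-Wiki"
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "SCP-10214 is a god who loves making pasta."
messages = [{"role": "user", "content": prompt}]

# Render the chat template as text. add_generation_prompt=True is an
# assumption (the commit omits it) so the model continues from the
# assistant turn instead of echoing the user message.
formatted_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```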