dfurman committed
Commit 592b1a9 · verified · 1 Parent(s): 566d3ae

Update README.md

Files changed (1): README.md (+7, -4)
README.md CHANGED
@@ -41,7 +41,7 @@ You can find the experiment on W&B at [this address](https://wandb.ai/dryanfurma

 <details>

-<summary>Setup</summary>
+### <summary>Setup</summary>

 ```python
 !pip install -qU transformers accelerate bitsandbytes
@@ -82,10 +82,13 @@ pipeline = transformers.pipeline(

 </details>

-Run
+### Run

 ```python
-messages = [{"role": "user", "content": "What is a large language model?"}]
+messages = [
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "What is a large language model?"},
+]
 prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 print("***Prompt:\n", prompt)

@@ -93,6 +96,6 @@ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7,
 print("***Generation:\n", outputs[0]["generated_text"])
 ```

-Output
+### Output

 coming
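The updated `messages` list is rendered into a single prompt string by `tokenizer.apply_chat_template`. As a rough illustration of what that call produces, here is a pure-Python sketch assuming a hypothetical ChatML-style template; the real template is model-specific and ships with the tokenizer, so treat this only as a shape reference:

```python
def apply_chat_template_sketch(messages, add_generation_prompt=True):
    """Render chat messages into a ChatML-style prompt string.

    Illustrative only: the actual template is defined per model in its
    tokenizer config and applied by tokenizer.apply_chat_template.
    """
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a large language model?"},
]
print(apply_chat_template_sketch(messages))
```

With `add_generation_prompt=True` the sketch, like the real call, ends the string with an opened assistant turn, which is what prompts the model to generate a reply rather than continue the user message.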
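The unchanged print line indexes `outputs[0]["generated_text"]`: a `text-generation` pipeline returns a list with one dict per generated sequence. A dummy-data sketch of that shape, with placeholder text rather than a real model generation:

```python
# Shape of a text-generation pipeline result; the value is a
# placeholder, not an actual generation.
outputs = [{"generated_text": "A large language model is a neural network trained on text."}]

print("***Generation:\n", outputs[0]["generated_text"])
```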