Update README.md
README.md
CHANGED
@@ -92,37 +92,33 @@ pipeline = transformers.pipeline(
 ### Run
 
 ```python
-question = "The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning. They sold 93 loaves in the morning and 39 loaves in the afternoon. A grocery store then returned 6 unsold loaves back to the bakery. How many loaves of bread did the bakery have left?\nRespond as succinctly as possible. Format the response as a completion of this table.\n|step|subquestion|procedure|result|\n|:---|:----------|:--------|:-----:|."
+question = """The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning.
+They sold 93 loaves in the morning and 39 loaves in the afternoon.
+A grocery store then returned 6 unsold loaves back to the bakery.
+How many loaves of bread did the bakery have left?
+Respond as succinctly as possible. Format the response as a completion of this table:
+|step|subquestion|procedure|result|
+|:---|:----------|:--------|:-----:|"""
+
 
 messages = [
     {"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": question},
 ]
 prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-print("***Prompt:\n", prompt)
+# print("***Prompt:\n", prompt)
 
 outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-print("***Generation:\n", outputs[0]["generated_text"])
-
+print("***Generation:")
+print(outputs[0]["generated_text"][len(prompt):])
 
-### Output
-
-```
-***Prompt:
-<|im_start|>system
-You are a helpful assistant.<|im_end|>
-<|im_start|>user
-The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning. They sold 93 loaves in the morning and 39 loaves in the afternoon. A grocery store then returned 6 unsold loaves back to the bakery. How many loaves of bread did the bakery have left?
-Respond as succinctly as possible. Format the response as a completion of this table.
-|step|subquestion|procedure|result|
-|:---|:----------|:--------|:-----:|.<|im_end|>
-<|im_start|>assistant
 ```
 
 ```
 ***Generation:
-|1|
-|2|
-|3|
+|1|Initial loaves|Start with total loaves|200|
+|2|Sold in morning|Subtract morning sales|200 - 93 = 107|
+|3|Sold in afternoon|Subtract afternoon sales|107 - 39 = 68|
+|4|Returned loaves|Add returned loaves|68 + 6 = 74|
 ```
 
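For context on the `apply_chat_template` call in the diff: the removed `### Output` section shows the ChatML-style prompt (`<|im_start|>` / `<|im_end|>` markers) that the model's tokenizer renders from `messages`. A minimal sketch of what such a template does, assuming the format shown in that output (`render_chatml` is a hypothetical helper; in practice the rendering comes from the tokenizer's own chat template):

```python
# Sketch of a ChatML-style chat template: one <|im_start|>role ... <|im_end|>
# block per message, mimicking the format shown in the removed ### Output section.
def render_chatml(messages, add_generation_prompt=True):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    prompt = "\n".join(parts)
    if add_generation_prompt:
        # add_generation_prompt=True leaves an open assistant turn
        # for the model to complete.
        prompt += "\n<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How many loaves are left?"},
]
prompt = render_chatml(messages)
print(prompt)
```

This is only an illustration of the shape of the prompt; different models ship different templates, which is why the README goes through `tokenizer.apply_chat_template` rather than hand-building the string.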
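The new `[len(prompt):]` slice works because a text-generation pipeline returns the prompt and the continuation as a single string in `generated_text`. A toy illustration with a stubbed pipeline result (no model involved; the continuation text here is made up):

```python
# The [len(prompt):] trick: "generated_text" is prompt + continuation,
# so slicing off the prompt leaves only the newly generated text.
prompt = "<|im_start|>user\nHow many loaves?<|im_end|>\n<|im_start|>assistant\n"

# Stub of what pipeline(prompt, ...) returns (shape only).
outputs = [{"generated_text": prompt + "|1|Initial loaves|Start with total loaves|200|"}]

generation = outputs[0]["generated_text"][len(prompt):]
print(generation)  # only the model's continuation, without the echoed prompt
```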
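As a sanity check, the arithmetic in the generated table is consistent:

```python
loaves = 200   # baked on Monday morning
loaves -= 93   # sold in the morning
loaves -= 39   # sold in the afternoon
loaves += 6    # returned by the grocery store
print(loaves)  # → 74
```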