Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference
mfromm and tuelwer committed (verified)
Commit 6d4a505 · Parent: 9ad497c

Fix print statement in example snippet (#8)

- Fix print statement in example snippet (3deb624c9ac012e2ed19a9ab7e06cde1d4d33271)


Co-authored-by: Tobias Uelwer <[email protected]>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -149,7 +149,7 @@ completion = client.chat.completions.create(
   messages=[{"role": "User", "content": "Hallo"}],
   extra_body={"chat_template":"DE"}
 )
-print(f"Assistant: {completion]")
+print(f"Assistant: {completion}")
 ```
 The default language of the Chat-Template can also be set when starting the vLLM Server. For this create a new file with the name `lang` and the content `DE` and start the vLLM Server as follows:
 ``` shell
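The one-line change above can be illustrated in isolation: the old line's mismatched bracket inside the f-string is rejected by the Python parser before the snippet can run at all. The sketch below uses `compile()` to show this, with `completion` as a hypothetical stand-in for the response object returned by `client.chat.completions.create(...)` in the README (no live server is assumed).

```python
# The old README line used "]" where "}" was needed inside the f-string:
broken = 'print(f"Assistant: {completion]")'
try:
    compile(broken, "<readme>", "exec")
    old_line_compiles = True
except SyntaxError:
    old_line_compiles = False
print("old line compiles:", old_line_compiles)  # False: mismatched bracket is a SyntaxError

# The corrected line closes the brace properly. `completion` here is a
# hypothetical stand-in for the chat-completions response object.
completion = {"role": "assistant", "content": "Hallo!"}
print(f"Assistant: {completion}")
```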