writinwaters committed · Commit ad3054c · 1 Parent(s): 8dff6d9

fixed a format issue for docusaurus publication (#871)


### What problem does this PR solve?

This PR wraps `${quantization}` in backticks in `docs/guides/deploy_local_llm.md` so the placeholder is rendered as inline code, fixing a format issue that broke Docusaurus publication (#871).

### Type of change

- [x] Documentation Update

Files changed (1)
  1. docs/guides/deploy_local_llm.md +1 -1
docs/guides/deploy_local_llm.md CHANGED

@@ -56,7 +56,7 @@ $ xinference-local --host 0.0.0.0 --port 9997
 ### Launch Xinference
 
 Decide which LLM you want to deploy ([here's a list for supported LLM](https://inference.readthedocs.io/en/latest/models/builtin/)), say, **mistral**.
-Execute the following command to launch the model, remember to replace ${quantization} with your chosen quantization method from the options listed above:
+Execute the following command to launch the model, remember to replace `${quantization}` with your chosen quantization method from the options listed above:
 ```bash
 $ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
 ```
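For context, the command touched by this diff is invoked as in the sketch below. The value `4-bit` is an assumed example, not taken from the PR; substitute one of the quantization options the guide actually lists.

```shell
# Sketch of the documented command with ${quantization} substituted.
# "4-bit" is an assumed example value; it is not specified in this PR.
quantization="4-bit"

# Build the command string rather than executing it, since running
# `xinference launch` requires a live Xinference server at this point.
cmd="xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}"
echo "$cmd"
```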