linqq9 committed (verified) · Commit a6c07df · 1 Parent(s): 20043b4

Update README.md

Files changed (1)
  1. README.md (+6 −0)
README.md CHANGED
@@ -45,6 +45,12 @@ Hammer models offer flexibility in deployment and usage, fully supporting both *
 
 ### Using vLLM
 #### Option 1: Using Hammer client
+Before using vLLM, first clone the Hammer code repository and change into the 'Hammer' directory:
+```
+git clone https://github.com/MadeAgents/Hammer.git
+cd Hammer
+```
+
 vLLM offers efficient serving with lower latency. To serve the model with vLLM:
 ```
 vllm serve MadeAgents/Hammer2.1-1.5b --host 0.0.0.0 --port 8000 --tensor-parallel-size 1
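
For context (not part of the commit above): once the `vllm serve` command is running, the model can be exercised through vLLM's OpenAI-compatible `/v1/chat/completions` endpoint. The sketch below assumes the server is reachable at localhost:8000 (the host and port from the serve command); the prompt and `max_tokens` value are illustrative placeholders.

```
# Minimal smoke test against the vLLM server started above.
# Assumes it is reachable at localhost:8000, as configured by the serve command.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "MadeAgents/Hammer2.1-1.5b",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64
      }'
```

This raw request only confirms the server responds; the Hammer client cloned in the diff presumably formats function-calling requests on top of this same endpoint.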