Update README.md
README.md
CHANGED
@@ -14,7 +14,7 @@ tags:
 - tool-calling
 - tool-planning
 ---
-# Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning
+# Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning-v0.1
 
 <!-- Provide a quick summary of what the model is/does. -->
 
@@ -92,12 +92,12 @@ Next, you need to download the model and tokenizer:
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 model = AutoModelForCausalLM.from_pretrained(
-    "Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning",
+    "Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning-v0.1",
     device_map="auto",
     torch_dtype=torch.bfloat16,  # recommended dtype
 )
 tokenizer = AutoTokenizer.from_pretrained(
-    "Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning",
+    "Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning-v0.1",
 )
 ```
 
@@ -146,7 +146,7 @@ Then our `generated_response` will look like this:
 For correct online serving with vLLM, you additionally need to load [qwen2_tool_parser.py]() and [chat_template.jinja]() from this repository.
 
 ```
-vllm serve Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning \
+vllm serve Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning-v0.1 \
     --download-dir "/path/to/cache" \
     --chat-template "/path/to/chat_template.jinja" \
     --tool-parser-plugin "/path/to/qwen2_tool_parser.py" \
@@ -726,8 +726,14 @@ This model is just the 7B version of Qwen2.5, and some hallucinations are possible. If y
 
 ## Evaluation <a name="eval"></a>
 
-
-
+| Task                  | Acc (%) |
+|-----------------------|---------|
+| Simple                | 73.25   |
+| Multiple              | 93.00   |
+| Parallel              | 90.00   |
+| Parallel Multiple     | 81.00   |
+| Relevance Detection   | 64.71   |
+| Irrelevance Detection | 85.72   |
 
 ## Authors
 
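The `vllm serve` command in the diff above exposes an OpenAI-compatible endpoint (by default at `http://localhost:8000/v1`). A minimal sketch of building a chat-completions request that exercises the model's tool-calling: the `get_weather` tool, the request helper, and the message content are illustrative assumptions, not part of this repository.

```python
import json

def build_chat_request(user_message: str) -> dict:
    """Build an OpenAI-style chat.completions payload with one example tool."""
    # Hypothetical tool schema for illustration; any JSON-Schema function works.
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": "Vikhrmodels/Qwen2.5-7B-Instruct-Tool-Planning-v0.1",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [weather_tool],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_chat_request("What is the weather in Moscow?")
print(json.dumps(payload, indent=2))
```

This payload would be POSTed to the server's `/v1/chat/completions` route; with the custom `chat_template.jinja` and `qwen2_tool_parser.py` loaded, tool calls come back parsed in the response's `tool_calls` field.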