Update README.md

README.md CHANGED

````diff
@@ -1,6 +1,6 @@
 ---
 base_model:
-- tencent/Hunyuan-
+- tencent/Hunyuan-1.8B-Instruct
 library_name: transformers
 ---
 
@@ -86,9 +86,9 @@ Note: The following benchmarks are evaluated by TRT-LLM-backend on several **bas
 
 
 ### Use with transformers
-First, please install transformers.
+First, please install transformers.
 ```SHELL
-pip install
+pip install "transformers>=4.56.0"
 ```
 Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
 1. Pass **"enable_thinking=False"** when calling apply_chat_template.
@@ -504,4 +504,4 @@ docker run --entrypoint="python3" --gpus all \
 
 ## Contact Us
 
-If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email ([email protected]).
+If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email ([email protected]).
````