manaestras committed · Commit ad41d1e · verified · 1 Parent(s): cf30896

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  base_model:
- - tencent/Hunyuan-4B-Pretrain
+ - tencent/Hunyuan-1.8B-Instruct
  library_name: transformers
  ---
 
@@ -86,9 +86,9 @@ Note: The following benchmarks are evaluated by TRT-LLM-backend on several **bas
 
 
  ### Use with transformers
- First, please install transformers. We will merge it into the main branch later.
+ First, please install transformers.
  ```SHELL
- pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
+ pip install "transformers>=4.56.0"
  ```
  Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
  1. Pass **"enable_thinking=False"** when calling apply_chat_template.
@@ -504,4 +504,4 @@ docker run --entrypoint="python3" --gpus all \
 
  ## Contact Us
 
- If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email ([email protected]).
+ If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email ([email protected]).
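For reference, the updated instructions amount to installing a released transformers build and passing enable_thinking=False to apply_chat_template to turn off the default slow-thinking (CoT) mode. Below is a minimal sketch of that flow, assuming the tencent/Hunyuan-1.8B-Instruct checkpoint named in the updated base_model field and that its chat template accepts the enable_thinking keyword; the prompt and generation settings are illustrative only.

```python
# Minimal sketch, assuming transformers>=4.56.0 (as pinned in the updated README)
# and that the checkpoint's chat template honors the enable_thinking flag.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-1.8B-Instruct"  # from the updated base_model field

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain KV caching in one sentence."}]

# enable_thinking=False disables the default slow-thinking (CoT) reasoning,
# as described in the README section shown in the diff above.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```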