yuhaozhangz committed · verified
Commit 4f9e989 · 1 Parent(s): 67b7622

Update README.md

Files changed (1)
  1. README.md +9 -10
README.md CHANGED
@@ -14,22 +14,22 @@ library_name: transformers
 </p><p></p>
 
 
-<p align="center">
- 🤗&nbsp;<a href="https://huggingface.co/tencent/"><b>Hugging Face</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
- <img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/>&nbsp;<a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
- <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/6594d0c6c5f1cd69a48b261d/04ZNQlAfs08Bfg4B1o3XO.png" width="14"/>&nbsp;<a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
-</p>
 
 <p align="center">
-🖥️&nbsp;<a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
+🤗&nbsp;<a href="https://huggingface.co/tencent/Hunyuan-7B-Instruct-FP8"><b>Hugging Face</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
+🖥️&nbsp;<a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🕖&nbsp;<a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
-🕹️&nbsp;<a href="https://hunyuan.tencent.com/"><b>Demo</b></a>&nbsp;&nbsp;&nbsp;&nbsp;
+🕹️&nbsp;<a href="https://hunyuan.tencent.com/"><b>Demo</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
+🤖&nbsp;<a href="https://www.modelscope.cn/models/Tencent-Hunyuan/Hunyuan-7B-Instruct-FP8"><b>ModelScope</b></a>
 </p>
 
+
 <p align="center">
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B"><b>GITHUB</b></a> |
 <a href="https://cnb.cool/tencent/hunyuan/Hunyuan-7B"><b>cnb.cool</b></a> |
-<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a>
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE.txt"><b>LICENSE</b></a> |
+<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
+<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
 </p>
 
 
@@ -47,10 +47,9 @@ We have released a series of Hunyuan dense models, comprising both pre-trained a
 - **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
 
 ## Related News
-* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-4B-Pretrain** , **Hunyuan-7B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Instruct** on Hugging Face.
+* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Pretrain** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Pretrain** , **Hunyuan-7B-Instruct** on Hugging Face.
 <br>
 
-
 ## Benchmark
 
 Note: The following benchmarks are evaluated by TRT-LLM-backend on several **base models**.
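
The README fragments above target the transformers library (see the `library_name: transformers` front matter in the first hunk header) and list the open-sourced instruct checkpoints. Below is a minimal loading sketch, assuming the standard `AutoTokenizer` / `AutoModelForCausalLM` API and a `tencent/Hunyuan-7B-Instruct` repo id; the exact repo id and dtype are assumptions for illustration, not something this commit confirms.

```python
# Minimal sketch: load a Hunyuan instruct checkpoint with transformers.
# Repo id, dtype, and trust_remote_code are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-7B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; pick one your hardware supports
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat prompt and generate a short reply.
messages = [{"role": "user", "content": "Give a one-sentence summary of GQA."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The FP8 variant referenced in the new Hugging Face and ModelScope links (`tencent/Hunyuan-7B-Instruct-FP8`) would presumably load the same way, provided the runtime has kernels for that quantization format.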