GoZion committed
Commit e1b07b9 · verified
1 Parent(s): a938f24

Update README.md

Files changed (1):
  1. README.md +16 -4
README.md CHANGED
@@ -21,12 +21,12 @@ tags:
 <p align="center">
 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>
 🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
- 🖥️ <a href="https://github.com/inclusionAI/Ling">GitHub</a>
+ 🖥️ <a href="https://github.com/codefuse-ai/Ling-Coder-Lite">GitHub</a>
 <p>
 
 ## Introduction
 
- Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, which has 16.8 billion parameters with 2.75 billion activated parameters. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite further pre-training from an intermediate checkpoint of Ling-Lite, incorporating an additional 3 trillion tokens. This extended pre-training significantly boosts the coding abilities of Ling-Lite, while preserving its strong performance in general language tasks. More details are described in the technique report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
+ Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, which has 16.8B parameters with 2.75B activated parameters. This model demonstrates state-of-the-art performance on 12 coding benchmarks, while simultaneously offering competitive latency and throughput compared to code LLMs of similar size. In addition to open-sourcing the model itself, we also release a substantial amount of code-related data, including synthetic QA, SFT and DPO datasets. More details are described in the technique report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
 
 ## Model Downloads
 
@@ -40,6 +40,18 @@ You can download the following table to see the various parameters for your use
 | Ling-Coder-lite | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite) |
 </div>
 
+ ## Dataset Downloads
+
+ <div align="center">
+
+ | **Model** | **Samples** | **Download** |
+ | :------------: | :----------------: | :--------------------: |
+ | Ling-Coder-SyntheticQA | 24M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SyntheticQA) |
+ | Ling-Coder-SFT | 5M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SFT) |
+ | Ling-Coder-DPO | 250K | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-DPO) |
+
+ </div>
+
 ## Evaluation
 
 Detailed evaluation results are reported in our technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
@@ -89,7 +101,7 @@ print(response)
 ```
 
 ## Deployment
- Please refer to [Github](https://github.com/inclusionAI/Ling/blob/master/README.md)
+ Please refer to [Github](https://github.com/codefuse-ai/Ling-Coder-Lite/blob/master/README.md)
 
 ## License
 This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ling-Coder-lite/blob/main/LICENCE).
@@ -99,7 +111,7 @@ This code repository is licensed under [the MIT License](https://huggingface.co/
 ```
 @misc{codefuse2025samplemattersleveragingmixtureofexperts,
 title={Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM},
- author={Codefuse and Ling Team and : and Wenting Cai and Yuchen Cao and Chaoyu Chen and Chen Chen and Siba Chen and Qing Cui and Peng Di and Junpeng Fang and Zi Gong and Ting Guo and Zhengyu He and Yang Huang and Cong Li and Jianguo Li and Zheng Li and Shijie Lian and BingChang Liu and Songshan Luo and Shuo Mao and Min Shen and Jian Wu and Jiaolong Yang and Wenjie Yang and Tong Ye and Hang Yu and Wei Zhang and Zhenduo Zhang and Hailin Zhao and Xunjin Zheng and Jun Zhou},
+ author={Codefuse and Ling Team},
 year={2025},
 eprint={2503.17793},
 archivePrefix={arXiv},
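
The Dataset Downloads table added by this commit only lists repository links. As a minimal sketch, not part of the commit itself, and assuming each repo is a standard Hugging Face dataset exposing a default configuration and a `train` split, the released data could be pulled with the `datasets` library:

```python
# Minimal sketch: fetch the Ling-Coder datasets listed in the new table.
# Assumes default configs and a "train" split; check the dataset cards for
# the actual split/config names before relying on this.
from datasets import load_dataset

sft = load_dataset("inclusionAI/Ling-Coder-SFT", split="train")
dpo = load_dataset("inclusionAI/Ling-Coder-DPO", split="train")

# The SyntheticQA set is listed at 24M samples, so streaming avoids a full
# local download while still allowing inspection of individual records.
synthetic_qa = load_dataset(
    "inclusionAI/Ling-Coder-SyntheticQA", split="train", streaming=True
)

print(sft)                      # features and row count
print(next(iter(synthetic_qa))) # one streamed sample
```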