Text Generation
Transformers
Safetensors
English
Chinese
bailing_moe
code
Mixture of Experts
conversational
custom_code
GoZion committed (verified)
Commit 2aefd27 · 1 parent: eefc744

Update README.md

Files changed (1)
  1. README.md +20 -4
README.md CHANGED
@@ -25,12 +25,16 @@ tags:
  <p align="center">
  🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>
  🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
- 🖥️ <a href="https://github.com/inclusionAI/Ling">GitHub</a>
+ 🖥️ <a href="https://github.com/codefuse-ai/Ling-Coder-Lite">GitHub</a>
  <p>

  ## Introduction

- Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, with 16.8 billion total parameters and 2.75 billion activated parameters. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite was further pre-trained from an intermediate checkpoint of Ling-Lite, incorporating an additional 3 trillion tokens. This extended pre-training significantly boosts the coding abilities of Ling-Lite while preserving its strong performance on general language tasks. More details are described in the technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
+ Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, with 16.8B total parameters and 2.75B activated parameters. The model demonstrates state-of-the-art performance on 12 coding benchmarks while offering competitive latency and throughput compared to code LLMs of similar size. In addition to open-sourcing the model itself, we also release a substantial amount of code-related data, including synthetic QA, SFT, and DPO datasets. More details are described in the technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
+
+ <p align="center">
+ <img src="./data-accuracy-efficiency.png" width="1000"/>
+ <p>

  ## Model Downloads

@@ -44,6 +48,18 @@ You can download the following table to see the various parameters for your use
  | Ling-Coder-lite | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite) |
  </div>

+ ## Dataset Downloads
+
+ <div align="center">
+
+ | **Dataset** | **Samples** | **Download** |
+ | :------------: | :----------------: | :------------------------------: |
+ | Ling-Coder-SyntheticQA | 24M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SyntheticQA) |
+ | Ling-Coder-SFT | 5M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SFT) |
+ | Ling-Coder-DPO | 250K | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-DPO) |
+
+ </div>
+
  ## Evaluation

  Detailed evaluation results are reported in our technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
@@ -93,7 +109,7 @@ print(response)
  ```

  ## Deployment
- Please refer to [GitHub](https://github.com/inclusionAI/Ling/blob/master/README.md)
+ Please refer to [GitHub](https://github.com/codefuse-ai/Ling-Coder-Lite/blob/master/README.md)

  ## License
  This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ling-Coder-lite/blob/main/LICENCE).
@@ -103,7 +119,7 @@ This code repository is licensed under [the MIT License](https://huggingface.co/
  ```
  @misc{codefuse2025samplemattersleveragingmixtureofexperts,
      title={Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM},
-     author={Codefuse and Ling Team and : and Wenting Cai and Yuchen Cao and Chaoyu Chen and Chen Chen and Siba Chen and Qing Cui and Peng Di and Junpeng Fang and Zi Gong and Ting Guo and Zhengyu He and Yang Huang and Cong Li and Jianguo Li and Zheng Li and Shijie Lian and BingChang Liu and Songshan Luo and Shuo Mao and Min Shen and Jian Wu and Jiaolong Yang and Wenjie Yang and Tong Ye and Hang Yu and Wei Zhang and Zhenduo Zhang and Hailin Zhao and Xunjin Zheng and Jun Zhou},
+     author={Codefuse and Ling Team},
      year={2025},
      eprint={2503.17793},
      archivePrefix={arXiv},
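For context on how the model described in this commit is typically run, below is a minimal, hypothetical sketch using the transformers library. The model card's own usage section (whose closing `print(response)` appears in the third hunk above) is authoritative; the prompt, dtype, and generation settings here are illustrative assumptions, and `trust_remote_code=True` reflects the repo's custom `bailing_moe` architecture tag.

```python
# Hypothetical usage sketch for Ling-Coder-lite with Hugging Face transformers.
# Prompt text and generation settings are placeholders, not the model card's snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-Coder-lite"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # spread the 16.8B-parameter MoE across available devices
    trust_remote_code=True,
)

# Build a chat-style prompt and generate a completion.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```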
 
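The Dataset Downloads table added in this commit lists three code corpora on the Hugging Face Hub. As an illustration only (the `train` split name and the record schema are assumptions, not stated in the commit), they could be sampled with the `datasets` library:

```python
# Hypothetical sketch for inspecting the released code datasets.
# Split names and record fields are assumed; check the first record before relying on a schema.
from datasets import load_dataset

# Stream so the 24M-sample SyntheticQA corpus is not downloaded in full.
synthetic_qa = load_dataset(
    "inclusionAI/Ling-Coder-SyntheticQA", split="train", streaming=True
)
print(next(iter(synthetic_qa)))  # peek at one record to see its fields

# The smaller SFT and DPO sets can be loaded normally.
sft = load_dataset("inclusionAI/Ling-Coder-SFT", split="train")
dpo = load_dataset("inclusionAI/Ling-Coder-DPO", split="train")
print(len(sft), len(dpo))
```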