Text Generation
Transformers
Safetensors
English
Chinese
bailing_moe
code
Mixture of Experts
conversational
custom_code
twelveand0 committed · Commit 9086497 · verified · Parent: 8698812

Update README.md

Files changed (1):
  1. README.md +1 -3
README.md CHANGED
@@ -30,9 +30,7 @@ tags:
 
 ## Introduction
 
-This repository contains the model described in the paper [Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM](https://huggingface.co/papers/2503.17793).
-
-Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, which has 16.8 billion parameters with 2.75 billion activated parameters. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite further pre-training from an intermediate checkpoint of Ling-Lite, incorporating an additional 3 trillion tokens. This extended pre-training significantly boosts the coding abilities of Ling-Lite, while preserving its strong performance in general language tasks.
+Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, which has 16.8 billion parameters with 2.75 billion activated parameters. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite further pre-training from an intermediate checkpoint of Ling-Lite, incorporating an additional 3 trillion tokens. This extended pre-training significantly boosts the coding abilities of Ling-Lite, while preserving its strong performance in general language tasks. More details are described in the technique report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
 
 ## Model Downloads
 
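For context, the model card this commit edits describes a Transformers-compatible MoE model with the custom `bailing_moe` architecture (hence the `custom_code` tag). Below is a minimal, hedged sketch of how such a model is typically loaded and prompted; the repository id `inclusionAI/Ling-Coder-lite` and the generation settings are assumptions, not part of this commit, so check the "Model Downloads" section of the README for the authoritative path.

```python
# Minimal sketch (assumption): loading Ling-Coder-Lite with Hugging Face Transformers.
# The repo id below is assumed; trust_remote_code=True is needed because the model
# ships the custom `bailing_moe` architecture rather than a built-in Transformers class.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ling-Coder-lite"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load the safetensors weights in their native dtype
    device_map="auto",
    trust_remote_code=True,
)

# The `conversational` tag suggests the tokenizer provides a chat template.
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```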