---
base_model:
- inclusionAI/Ling-Coder-lite-base
datasets:
- inclusionAI/Ling-Coder-SFT
- inclusionAI/Ling-Coder-SyntheticQA
- inclusionAI/Ling-Coder-DPO
language:
- en
- zh
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- code
- moe
---
# Ling-Coder-lite
<p align="center">
    <img src="https://huggingface.co/inclusionAI/Ling-lite/resolve/main/ant-bailing.png" width="100"/>
</p>
<p align="center">
    🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>
    🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
    🖥️ <a href="https://github.com/codefuse-ai/Ling-Coder-Lite">GitHub</a>
</p>
## Introduction
Ling-Coder-Lite is a Mixture-of-Experts (MoE) LLM open-sourced by InclusionAI, with 16.8B total parameters of which 2.75B are activated per token. The model demonstrates state-of-the-art performance on 12 coding benchmarks, while offering latency and throughput competitive with code LLMs of similar size. In addition to the model itself, we also release a substantial amount of code-related data, including synthetic QA, SFT, and DPO datasets. More details are described in the technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
## Model Downloads
The following table lists the available model variants and their key parameters so you can pick the one that fits your use case. If you are located in mainland China, we also provide the models on modelscope.cn to speed up the download process.
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ling-Coder-lite-base | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite-base) |
| Ling-Coder-lite | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite) |
| Ling-Coder-lite-GPTQ-Int8 | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite-GPTQ-Int8) |
</div>
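If you prefer to fetch the weights ahead of time rather than at first load, here is a minimal sketch using `huggingface_hub`; the `local_dir` path below is an arbitrary example, not a required location:

```python
# Pre-download the model weights with huggingface_hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="inclusionAI/Ling-Coder-lite",
    local_dir="./Ling-Coder-lite",  # arbitrary example directory
)
print(f"Model files downloaded to: {local_path}")
```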
## Dataset Downloads
<div align="center">
| **Dataset** | **Samples** | **Download** |
| :------------: | :----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------: |
| Ling-Coder-SyntheticQA | 24M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SyntheticQA) |
| Ling-Coder-SFT | 5M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SFT) |
| Ling-Coder-DPO | 250K | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-DPO) |
</div>
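The datasets can be loaded directly with the `datasets` library. A minimal sketch follows; the `train` split name is an assumption, and field names vary per dataset, so check each dataset card:

```python
# Stream one of the released datasets without downloading it in full.
from datasets import load_dataset

ds = load_dataset("inclusionAI/Ling-Coder-SFT", split="train", streaming=True)  # split name assumed
for sample in ds:
    print(sample)  # field names depend on the dataset card
    break
```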
## Evaluation
Detailed evaluation results are reported in our technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
## Quickstart
### π€ Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-Coder-lite"

# trust_remote_code is required for the custom MoE architecture;
# device_map="auto" places the model on available accelerators.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)

prompt = "Write a quick sort algorithm in python."
messages = [
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append
# the generation prompt so the model answers as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
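For interactive use, you may want tokens printed as they are generated rather than after generation finishes. Here is a minimal sketch using `transformers`' built-in `TextStreamer`, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```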
## Deployment
Please refer to the [GitHub repository](https://github.com/codefuse-ai/Ling-Coder-Lite/blob/master/README.md) for deployment instructions.
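If your deployment stack exposes an OpenAI-compatible endpoint (as common serving frameworks such as vLLM do), you can query it with the `openai` client. This is a hedged sketch only: the base URL, API key, and served model name below are assumptions that depend on how the server is launched:

```python
# Query a hypothetical OpenAI-compatible endpoint serving Ling-Coder-lite.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed local server
completion = client.chat.completions.create(
    model="inclusionAI/Ling-Coder-lite",  # served model name is deployment-dependent
    messages=[{"role": "user", "content": "Write a quick sort algorithm in python."}],
    max_tokens=512,
)
print(completion.choices[0].message.content)
```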
## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ling-Coder-lite/blob/main/LICENCE).
## Citation
```
@misc{codefuse2025samplemattersleveragingmixtureofexperts,
title={Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM},
author={Codefuse and Ling Team},
year={2025},
eprint={2503.17793},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.17793},
}
``` |