---
datasets:
- inclusionAI/Ling-Coder-SyntheticQA
language:
- en
- zh
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- code
- moe
---
# Ling-Coder-lite-base
<p align="center">
<img src="https://huggingface.co/inclusionAI/Ling-lite/resolve/main/ant-bailing.png" width="100"/>
</p>
<p align="center">
πŸ€– <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>
πŸ€— <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
πŸ–₯️ <a href="https://github.com/inclusionAI/Ling">GitHub</a>
</p>
## Introduction
Ling-Coder-Lite is an MoE LLM provided and open-sourced by InclusionAI, with 16.8 billion total parameters and 2.75 billion activated parameters. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite was further pre-trained from an intermediate checkpoint of Ling-Lite on an additional 3 trillion tokens. This extended pre-training significantly boosts the coding abilities of Ling-Lite while preserving its strong performance on general language tasks. The model is described in the paper [Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM](https://huggingface.co/papers/2503.17793).
## Model Downloads
Refer to the following table to choose the variant that fits your use case. If you are located in mainland China, we also provide the models on modelscope.cn to speed up the download process; a minimal download sketch follows the table.
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ling-Coder-lite-base | 16.8B | 2.75B | 16K | [πŸ€— HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite-base) |
| Ling-Coder-lite | 16.8B | 2.75B | 16K | [πŸ€— HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite) |
</div>
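For programmatic downloads, here is a minimal sketch using `huggingface_hub` (the `modelscope` package exposes an analogous `snapshot_download` for users in mainland China); the local destination directory is an illustrative assumption:
```python
# Minimal sketch: fetch the base model weights with huggingface_hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="inclusionAI/Ling-Coder-lite-base",
    local_dir="./Ling-Coder-lite-base",  # hypothetical destination; adjust to your setup
)
print(f"Model files downloaded to: {local_path}")
```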
## Evaluation
Detailed evaluation results are reported in our technical report [TBD].
## Quickstart
### πŸ€— Hugging Face Transformers
Here is a code snippet showing how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-Coder-lite"

# Load the chat model; device_map="auto" places layers on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)

prompt = "Write a quick sort algorithm in python."
messages = [
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model answers as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
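Since this card hosts the base model, which is not instruction-tuned, plain text completion without a chat template is the more typical usage. Below is a minimal sketch along those lines; the prompt is an illustrative assumption, not from the original card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-Coder-lite-base"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Base models continue raw text, so feed a code prefix rather than a chat turn.
prompt = "def quick_sort(arr):"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```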
## Deployment
Please refer to the [GitHub repository](https://github.com/inclusionAI/Ling/blob/master/README.md) for deployment instructions; a hedged offline-inference sketch is shown below.
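The following is a minimal sketch of offline batch inference with vLLM, assuming the architecture is supported once the environment is set up per the GitHub README; the sampling parameters are illustrative assumptions:
```python
# Minimal sketch: offline batch inference with vLLM, assuming the model is
# supported per the setup instructions in the Ling GitHub README.
from vllm import LLM, SamplingParams

llm = LLM(model="inclusionAI/Ling-Coder-lite", trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.2, max_tokens=512)  # illustrative values

outputs = llm.generate(["Write a quick sort algorithm in python."], sampling_params)
print(outputs[0].outputs[0].text)
```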
## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ling-Coder-lite/blob/main/LICENCE).
## Citation
```bibtex
@misc{codefuse2025samplemattersleveragingmixtureofexperts,
title={Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM},
author={Codefuse and Ling Team and : and Wenting Cai and Yuchen Cao and Chaoyu Chen and Chen Chen and Siba Chen and Qing Cui and Peng Di and Junpeng Fang and Zi Gong and Ting Guo and Zhengyu He and Yang Huang and Cong Li and Jianguo Li and Zheng Li and Shijie Lian and BingChang Liu and Songshan Luo and Shuo Mao and Min Shen and Jian Wu and Jiaolong Yang and Wenjie Yang and Tong Ye and Hang Yu and Wei Zhang and Zhenduo Zhang and Hailin Zhao and Xunjin Zheng and Jun Zhou},
year={2025},
eprint={2503.17793},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.17793},
}
```