---
base_model: agentica-org/DeepCoder-14B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: mit
language:
- en
datasets:
- UCSC-VLAA/STAR-1
---

## SAI stands for Safe, Aligned, and Intelligent.

SAI-DeepCoder-14B-Preview-v1.0 is fine-tuned on policy-grounded data to remain safe and aligned with human values while coding. Specifically, it uses the STAR-1 dataset, which integrates diverse deliberative-reasoning examples rigorously evaluated by GPT-4o. This helps the model maintain robust safety standards and minimize bias, promoting responsible, secure, and effective coding practices without compromising its core reasoning capabilities.


## SAI-DeepCoder-14B-Preview-v1.0 Overview
DeepCoder-14B-Preview is a code-reasoning LLM fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning (RL) to scale to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), an 8% improvement over the base model (53%), and reaches performance similar to OpenAI's o3-mini with just 14B parameters.

<div style="margin: 0 auto;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/654037be97949fd2304aab7f/r3-vzkItOCrMf1qldW0Mj.png" style="width: 100%;" />
</div>
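For reference, Pass@1 on benchmarks like LiveCodeBench is typically computed with the unbiased pass@k estimator introduced for HumanEval. A minimal sketch (not the benchmark's exact harness):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, passes the tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the empirical pass rate c / n.
rate = pass_at_k(10, 6, 1)
```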

## Data
Our training dataset consists of approximately 24K unique problem-test pairs compiled from:
- Taco-Verified
- PrimeIntellect SYNTHETIC-1
- LiveCodeBench v5 (5/1/23-7/31/24)
- STAR-1

## Training Recipe

Our training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.

### GRPO+

We enhance the original GRPO algorithm with insights from DAPO to enable more stable training:

- **Offline Difficulty Filtering:** DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.
- **No Entropy Loss:** We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.
- **No KL Loss:** Removing the KL loss frees the LLM from being constrained to the trust region of the original SFT model. It also obviates computing log probabilities for the reference policy, thereby accelerating training.
- **Overlong Filtering** **(from DAPO):** To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.
- **Clip High (from DAPO):** By increasing the upper bound in GRPO/PPO’s surrogate loss, we encourage more exploration and more stable entropy.
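The two loss-shaping ideas above, overlong filtering and clip-high, can be illustrated with a small sketch. The epsilon values and function names here are illustrative, not the actual training code:

```python
def overlong_mask(length: int, max_len: int) -> float:
    """DAPO-style overlong filtering: return 0.0 (mask the loss) for a
    response truncated at the context limit, else 1.0, so the policy is
    not penalized for answers cut off by the training context window."""
    return 0.0 if length >= max_len else 1.0

def clipped_surrogate(ratio: float, advantage: float,
                      eps_low: float = 0.2, eps_high: float = 0.28) -> float:
    """Per-token PPO/GRPO surrogate with an asymmetric "clip high" upper
    bound: eps_high > eps_low lets positively-advantaged tokens be
    up-weighted further, encouraging exploration. Epsilons are
    illustrative defaults, not the paper's exact hyperparameters."""
    clipped = min(max(ratio, 1.0 - eps_low), 1.0 + eps_high)
    return min(ratio * advantage, clipped * advantage)
```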

### Iterative Context Lengthening

Our original `DeepScaleR-1.5B-Preview` scaled long-context training from 8K→16K→24K, achieving 33→38→43% on AIME respectively. Similarly, `DeepCoder-14B-Preview` was trained at 16K→32K, achieving 54→58% on LiveCodeBench v5, and successfully generalizes to longer contexts when evaluated at 64K, reaching 60.6%.

DeepCoder generalizes better to long contexts than the base distilled model, due to DAPO's overlong filtering. However, its longer responses are often truncated when the max length is capped at 16K, which can lower its scores.

| **Model** | **16K** | **32K** | **64K** |
| --- | --- | --- | --- |
| **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |
| **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |

A more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).

## Evaluation

We evaluate `DeepCoder-14B-Preview` on various coding benchmarks, including LiveCodeBench (LCB v5), Codeforces, and HumanEval+.

| **Model** | LCB (v5)(8/1/24-2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |
| --- | --- | --- | --- | --- |
| **DeepCoder-14B-Preview (ours)** | ***60.6*** | ***1936*** | ***95.3*** | ***92.6*** |
| **DeepSeek-R1-Distill-Qwen-14B** | 53.0 | 1791 | 92.7 | 92.0 |
| **O1-2024-12-17 (Low)** | 59.5 | **1991** | **96.1** | 90.8 |
| **O3-Mini-2025-1-31 (Low)** | **60.9** | 1918 | 94.9 | 92.6 |
| **O1-Preview** | 42.7 | 1658 | 88.5 | 89 |
| **Deepseek-R1** | 62.8 | 1948 | 95.4 | 92.6 |
| **Llama-4-Behemoth** | 49.4 | - | - | - |

## Serving DeepCoder
Our model can be served using popular high-performance inference systems:
- vLLM
- Hugging Face Text Generation Inference (TGI)
- SGLang
- TensorRT-LLM

All these systems support the OpenAI Chat Completions API format.

### Usage Recommendations
Our usage recommendations are similar to those of R1 and R1 Distill series:
1. Avoid adding a system prompt; all instructions should be contained within the user prompt.
2. `temperature = 0.6`
3. `top_p = 0.95`
4. This model performs best with `max_tokens` set to at least `64000` 
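Putting these recommendations together, here is a hedged sketch of a Chat Completions request against any of the OpenAI-compatible servers above. The endpoint URL and model identifier are placeholders, not confirmed values:

```python
import json
import urllib.request

def build_request(user_prompt: str,
                  model: str = "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0",
                  url: str = "http://localhost:8000/v1/chat/completions"):
    """Assemble a Chat Completions request following the usage
    recommendations above: no system prompt, temperature 0.6,
    top_p 0.95, and max_tokens of at least 64000."""
    payload = {
        "model": model,
        # All instructions go in the user prompt; no system message.
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": 0.6,
        "top_p": 0.95,
        "max_tokens": 64000,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Write a Python function that reverses a linked list.")
# Send with urllib.request.urlopen(req) once a server is running.
```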

## EpistemeAI Training script
[Fine tune DeepCoder with unsloth](https://colab.research.google.com/drive/1If_NwF2aNvQrG7lyCClhJIFVbdHhMN8c?usp=sharing)


## License
This project is released under the MIT License, reflecting our commitment to open and accessible AI development.
We believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.
This permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.

## Acknowledgement
- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source post-training library.
- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).
- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).
- Thanks to UCSC-VLAA for the STAR-1 dataset.

## Citation 
```bibtex
@misc{deepcoder2025,
  title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},
  author={Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, Ion Stoica},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},
  note={Notion Blog},
  year={2025}
}
```

```bibtex
@article{wang2025star1saferalignmentreasoning,
    title={STAR-1: Safer Alignment of Reasoning LLMs with 1K Data}, 
    author={Zijun Wang and Haoqin Tu and Yuhan Wang and Juncheng Wu and Jieru Mei and Brian R. Bartoldson and Bhavya Kailkhura and Cihang Xie},
    year={2025},
    journal = {arXiv preprint arXiv:2504.01903}
}
```


## Uploaded model

- **Developed by:** EpistemeAI
- **License:** MIT
- **Finetuned from model:** agentica-org/DeepCoder-14B-Preview

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)