cgus committed · verified
Commit be1aeaf · 1 parent: 7e71351

Update README.md

Files changed (1): README.md (+19, −4)

README.md
@@ -1,6 +1,5 @@
 ---
-license: mit
-library_name: transformers
+library_name: exllamav2
 datasets:
 - PrimeIntellect/verifiable-coding-problems
 - likaixin/TACO-verified
@@ -8,10 +7,26 @@ datasets:
 language:
 - en
 base_model:
-- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
+- agentica-org/DeepCoder-14B-Preview
 pipeline_tag: text-generation
 ---
-
+# DeepCoder-14B-Preview-exl2
+Original model: [DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) by [Agentica](https://huggingface.co/agentica-org)
+Based on: [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) by [DeepSeek](https://huggingface.co/deepseek-ai)
+Foundation model: [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) by [Qwen](https://huggingface.co/Qwen)
+
+## Quants
+[4bpw h6 (main)](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/main)
+[4.5bpw h6](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/4.5bpw-h6)
+[5bpw h6](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/5bpw-h6)
+[6bpw h6](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/6bpw-h6)
+[8bpw h8](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/8bpw-h8)
+## Quantization notes
+Made with Exllamav2 0.2.8 using the default calibration dataset.
+It can be used with TabbyAPI or Text-Generation-WebUI, and it requires an RTX GPU on Windows or an RTX/ROCm GPU on Linux.
+RAM offloading isn't natively supported, so make sure the model fits your GPU's VRAM.
+I'd recommend at least a 12GB GPU for the 4-5bpw quants.
+# Original model card
 <div align="center">
 <span style="font-family: default; font-size: 1.5em;">DeepCoder-14B-Preview</span>
 <div>
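
Each quant in the Quants list above lives on its own branch of the repo, so fetching one means passing the branch name as a revision. A minimal sketch using the `huggingface_hub` Python client; the repo id and branch names come from the Quants list, while the local directory name is an arbitrary choice for this example:

```python
from huggingface_hub import snapshot_download

# Fetch the 4.5bpw-h6 quant; branch names match the links in the Quants section.
snapshot_download(
    repo_id="cgus/DeepCoder-14B-Preview-exl2",
    revision="4.5bpw-h6",
    local_dir="DeepCoder-14B-Preview-exl2-4.5bpw",  # arbitrary local path
)
```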
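The quantization notes name TabbyAPI and Text-Generation-WebUI as frontends, but the quant can also be loaded directly with the exllamav2 Python API. The sketch below follows the 0.2.x example scripts and assumes the hypothetical download directory from the previous snippet; as a rough sanity check on the VRAM advice, at 5bpw the model's ~14.8B parameters work out to about 14.8 × 5 / 8 ≈ 9.2 GB of weights before the KV cache, which is why 12GB is a reasonable floor:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Load the quant from the directory created by snapshot_download above (hypothetical path).
config = ExLlamaV2Config("DeepCoder-14B-Preview-exl2-4.5bpw")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated while the weights load
model.load_autosplit(cache)               # weights + cache must fit in VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

output = generator.generate(
    prompt="Write a Python function that checks whether a number is prime.",
    max_new_tokens=256,
)
print(output)
```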