---
license: mit
---

## Overview

**DeepSeek** developed and released the **DeepSeek-R1** series, featuring multiple model sizes fine-tuned for high-performance text generation. These models are optimized for dialogue, reasoning, and information-seeking tasks, providing a balance of efficiency and accuracy while maintaining a smaller footprint compared to their original counterparts.

The DeepSeek-R1 models include distilled and full-scale variants of both **Qwen** and **Llama** architectures, catering to various applications such as customer support, conversational AI, research, and enterprise automation.

## Variants

### DeepSeek-R1

| No | Variant | Branch | Cortex CLI command |
| -- | ------- | ------ | ------------------ |
| 1 | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/cortexso/deepseek-r1/tree/1.5b) | 1.5b | `cortex run deepseek-r1-distill-qwen-1.5b` |
| 2 | [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/cortexso/deepseek-r1/tree/7b) | 7b | `cortex run deepseek-r1-distill-qwen-7b` |
| 3 | [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/cortexso/deepseek-r1/tree/8b) | 8b | `cortex run deepseek-r1-distill-llama-8b` |
| 4 | [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/cortexso/deepseek-r1/tree/14b) | 14b | `cortex run deepseek-r1-distill-qwen-14b` |
| 5 | [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/cortexso/deepseek-r1/tree/32b) | 32b | `cortex run deepseek-r1-distill-qwen-32b` |
| 6 | [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/cortexso/deepseek-r1/tree/70b) | 70b | `cortex run deepseek-r1-distill-llama-70b` |

Each branch contains a default quantized version:

- **Qwen-1.5B:** q4-km
- **Qwen-7B:** q4-km
- **Llama-8B:** q4-km
- **Qwen-14B:** q4-km
- **Qwen-32B:** q4-km
- **Llama-70B:** q4-km
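
Each branch name above doubles as the Hugging Face revision for that variant. As a small illustration (the helper below is hypothetical, not part of Cortex or Hugging Face tooling), a branch maps directly to its repository tree URL:

```python
# Illustrative helper: build the repository tree URL for one of the
# quantized branches listed in the table above.
BRANCHES = ("1.5b", "7b", "8b", "14b", "32b", "70b")

def branch_url(branch: str, repo: str = "cortexso/deepseek-r1") -> str:
    """Return the Hugging Face tree URL for a known quantized branch."""
    if branch not in BRANCHES:
        raise ValueError(f"unknown branch {branch!r}; expected one of {BRANCHES}")
    return f"https://huggingface.co/{repo}/tree/{branch}"

# branch_url("8b") -> "https://huggingface.co/cortexso/deepseek-r1/tree/8b"
```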

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) guide.
2. Use the model in the Jan model Hub:
   ```text
   cortexso/deepseek-r1 [WIP]
   ```

## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) guide.
2. Run the model with the command:
   ```bash
   cortex run [WIP]
   ```
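
Once a model is running, Cortex serves an OpenAI-compatible HTTP API locally, so any OpenAI-style client can talk to it. The sketch below uses only the Python standard library; the port (39281) and the model id are assumptions, and `build_chat_request`/`chat` are hypothetical helpers — check your Cortex configuration for the real values.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str,
         base_url: str = "http://127.0.0.1:39281/v1") -> str:
    """Send one chat turn to a locally running Cortex server (port assumed)."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
# chat("deepseek-r1-distill-qwen-1.5b", "Explain quantization in one sentence.")
```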

## Credits

- **Author:** DeepSeek
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/deepseek-ai/DeepSeek-R1#license)
- **Paper:** [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/html/2501.12948v1)