Update README.md
DeepSeek-R1 has been making headlines for rivaling OpenAI’s o1 reasoning model while remaining fully open source. Many users want to run it locally to ensure data privacy, reduce latency, and maintain offline access. However, fitting such a large model onto personal devices typically requires quantization (e.g., Q4_K_M), which often sacrifices accuracy (up to ~22% accuracy loss) and undermines the benefits of running a reasoning model locally.
We’ve solved this trade-off by quantizing the DeepSeek R1 Distilled model to 1/4 of its original size without losing accuracy. Tests on an **HP OmniBook AI PC** with an **AMD Ryzen™ AI 9 HX 370** processor showed a decoding speed of **66.40 tokens per second** and peak RAM usage of just **1228 MB** for the NexaQuant version, compared to only **25.28 tokens per second** and **3788 MB** of RAM for the unquantized version, while NexaQuant **maintains full-precision model accuracy**.
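To make the comparison concrete, here is a small sketch in plain Python that works out the ratios implied by the figures quoted above (the 4x file-size figure simply follows from storing 4-bit instead of 16-bit weights):

```python
# Figures quoted above for the HP OmniBook AI PC (AMD Ryzen AI 9 HX 370)
nexaquant_tps, baseline_tps = 66.40, 25.28   # decoding speed, tokens per second
nexaquant_ram, baseline_ram = 1228, 3788     # peak RAM usage, MB

decode_speedup = nexaquant_tps / baseline_tps   # ~2.6x faster decoding
ram_reduction = baseline_ram / nexaquant_ram    # ~3.1x lower peak RAM
size_reduction = 16 / 4                         # 16-bit -> 4-bit weights, ~4x smaller file

print(f"decode speedup: {decode_speedup:.2f}x")
print(f"peak RAM reduction: {ram_reduction:.2f}x")
print(f"file size reduction: ~{size_reduction:.0f}x")
```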
## NexaQuant Use Case Demo
Here’s a comparison of how a standard Q4_K_M quantization and NexaQuant-4Bit handle a common investment banking brain teaser question. NexaQuant excels in accuracy while shrinking the model file size by a factor of 4.
Right Answer: 47
## Benchmarks
The benchmarks show that NexaQuant’s 4-bit model preserves the reasoning capacity of the original 16-bit model, delivering uncompromised performance in a significantly smaller memory and storage footprint.
**Reasoning Capacity:**
NexaQuant on Reasoning Benchmarks Compared to BF16 and LM Studio's Q4_K_M
| **IFEval - Prompt - Loose** | 13.86 | 10.29 | 15.71 |
| **IFEval - Prompt - Strict** | 12.57 | 8.09 | 15.16 |
## Run locally
NexaQuant is compatible with **Nexa-SDK**, **Ollama**, **LM Studio**, **Llama.cpp**, and any llama.cpp-based project. Below, we outline multiple ways to run the model locally.
Get the latest version of LM Studio from the [official website](https://lmstudio.ai/).
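For the llama.cpp-based route, here is a minimal, hypothetical sketch using the llama-cpp-python bindings; the GGUF filename is a placeholder, so point `model_path` at the NexaQuant GGUF file you actually downloaded:

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-NexaQuant-4bit.gguf",  # placeholder filename
    n_ctx=4096,       # context window
    n_gpu_layers=0,   # CPU-only; raise this to offload layers with a GPU build
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "A clock reads 3:15. What is the angle between the hour and minute hands?"}],
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])
```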
## What's next
1. This model is built for complex problem-solving, which is why it sometimes goes through a long thinking process even for simple questions. We recognize this and are working on improving it in the next update.
2. Support inference of the NexaQuant DeepSeek-R1 distilled model on NPU.
If you liked our work, feel free to ⭐ star Nexa's GitHub repo.
Interested in running DeepSeek R1 on your own devices with optimized CPU, GPU, and NPU acceleration, or in compressing your fine-tuned DeepSeek-Distill-R1? [Let’s chat!](https://nexa.ai/book-a-call)
[Blogs](https://nexa.ai/blogs/deepseek-r1-nexaquant) | [Discord](https://discord.gg/nexa-ai) | [X(Twitter)](https://x.com/nexa_ai)
Join the [Discord](https://discord.gg/nexa-ai) server for help and discussion.