Update README.md (#2), opened by yli-nexa4ai

README.md CHANGED
@@ -65,4 +65,62 @@ Get the latest version from the [official website](https://lmstudio.ai/).
1. In LM Studio's top panel, search for and select `NexaAIDev/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant`.
2. Click `Download` (if not already downloaded) and wait for the model to load.
3. Once loaded, go to the chat window and start a conversation.
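If you prefer scripting over the chat window, LM Studio can also serve the loaded model through its OpenAI-compatible local HTTP server (by default on port 1234). The sketch below is our own illustration, assuming the server is enabled and the model above is loaded; the `ask` helper is hypothetical, not part of LM Studio.

```python
import json
import urllib.request

# Assumes LM Studio's local server is running on the default port (1234)
# and the NexaQuant model from step 1 is loaded.
BASE_URL = "http://localhost:1234/v1/chat/completions"
MODEL = "NexaAIDev/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant"

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }

def ask(prompt: str) -> str:
    """Send the prompt to the local LM Studio server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires the local server to be running):
# print(ask("What is the minimum number of breaks for a 6x8 chocolate bar?"))
```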
---

## Example

On the left is an example response from LM Studio's standard Q4_K_M build; on the right is the response from our NexaQuant version.

Prompt: a common investment banking brainteaser question.

There is a 6x8 rectangular chocolate bar made up of small 1x1 bits. We want to break it into the 48 bits. We can break one piece of chocolate horizontally or vertically, but we cannot break two pieces together! What is the minimum number of breaks required?

Right answer: 47

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66abfd6f65beb23afa427d8a/ZS9e66t7OhBIno4eQ3OaX.png" width="80%" alt="Example" />
</div>
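Why 47 is right: every break splits exactly one piece into two, so each break increases the total piece count by one, and going from 1 piece to 48 pieces therefore takes exactly 47 breaks regardless of order. A small brute-force sketch (our own illustration) confirming that every breaking strategy uses the same count:

```python
# Each break splits one piece into two, so breaks = final pieces - 1.
# Brute-force check: enumerate every way to break a rows x cols bar into
# 1x1 bits and collect all possible total break counts.
from functools import lru_cache

@lru_cache(maxsize=None)
def break_counts(rows: int, cols: int) -> frozenset:
    """All achievable total break counts for reducing a rows x cols bar to 1x1 bits."""
    if rows == 1 and cols == 1:
        return frozenset({0})
    counts = set()
    for r in range(1, rows):       # horizontal breaks
        for a in break_counts(r, cols):
            for b in break_counts(rows - r, cols):
                counts.add(1 + a + b)
    for c in range(1, cols):       # vertical breaks
        for a in break_counts(rows, c):
            for b in break_counts(rows, cols - c):
                counts.add(1 + a + b)
    return frozenset(counts)

# For a 6x8 bar, every strategy uses exactly 6*8 - 1 = 47 breaks:
# break_counts(6, 8) == frozenset({47})
```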
## Benchmarks

NexaQuant on reasoning benchmarks, compared to BF16 and LM Studio's Q4_K_M:

**1.5B:**

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66abfd6f65beb23afa427d8a/Cyh1zVvDHNBT598IkLHkd.png" width="80%" alt="Example" />
</div>

General capability has also improved substantially:

**1.5B:**
| Benchmark                         | Full 16-bit | llama.cpp (4-bit) | NexaQuant (4-bit) |
|-----------------------------------|-------------|-------------------|-------------------|
| **HellaSwag**                     | 35.81       | 34.31             | 34.60             |
| **MMLU**                          | 37.31       | 35.49             | 37.41             |
| **Humanities**                    | 31.86       | 34.87             | 30.97             |
| **Social Sciences**               | 41.50       | 38.17             | 42.09             |
| **STEM**                          | 38.60       | 35.74             | 39.26             |
| **ARC Easy**                      | 67.55       | 54.20             | 65.53             |
| **MathQA**                        | 41.04       | 28.51             | 39.87             |
| **PIQA**                          | 65.56       | 61.70             | 65.07             |
| **IFEval - Instruction - Loose**  | 25.06       | 24.77             | 28.54             |
| **IFEval - Instruction - Strict** | 23.62       | 22.94             | 27.94             |
| **IFEval - Prompt - Loose**       | 13.86       | 10.29             | 15.71             |
| **IFEval - Prompt - Strict**      | 12.57       | 8.09              | 15.16             |
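As a rough way to read the table, averaging each column (a sketch using only the 1.5B scores above) shows NexaQuant's 4-bit average roughly recovering the full 16-bit average, while the standard llama.cpp Q4 trails by about four points:

```python
# Column averages over the 1.5B benchmark table above
# (full 16-bit, llama.cpp 4-bit, NexaQuant 4-bit).
scores = {
    "HellaSwag":                  (35.81, 34.31, 34.60),
    "MMLU":                       (37.31, 35.49, 37.41),
    "Humanities":                 (31.86, 34.87, 30.97),
    "Social Sciences":            (41.50, 38.17, 42.09),
    "STEM":                       (38.60, 35.74, 39.26),
    "ARC Easy":                   (67.55, 54.20, 65.53),
    "MathQA":                     (41.04, 28.51, 39.87),
    "PIQA":                       (65.56, 61.70, 65.07),
    "IFEval - Instruction - Loose":  (25.06, 24.77, 28.54),
    "IFEval - Instruction - Strict": (23.62, 22.94, 27.94),
    "IFEval - Prompt - Loose":       (13.86, 10.29, 15.71),
    "IFEval - Prompt - Strict":      (12.57,  8.09, 15.16),
}

# zip(*...) regroups the per-benchmark triples into three per-method columns.
full16, llamacpp_q4, nexaquant_q4 = (
    sum(col) / len(scores) for col in zip(*scores.values())
)
print(f"16-bit: {full16:.2f}  llama.cpp Q4: {llamacpp_q4:.2f}  NexaQuant Q4: {nexaquant_q4:.2f}")
```

Averaging is only a coarse summary (the benchmarks are not on comparable scales), but it makes the gap between the two 4-bit variants easy to see.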
## What's next

1. Run inference with the NexaQuant DeepSeek-R1 distilled model on NPUs.
2. This model is designed for complex problem-solving, which is why it has a longer thinking process. We understand this can be an issue in some cases, and we're actively working on improvements.

### Follow us

If you liked our work, feel free to ⭐ star [Nexa's GitHub repo](https://github.com/NexaAI/nexa-sdk).

Interested in running DeepSeek R1 on your own devices with optimized CPU, GPU, and NPU acceleration, or in compressing your fine-tuned DeepSeek-R1-Distill model? [Let's chat!](https://nexa.ai/book-a-call)

[Blogs](https://nexa.ai/blogs/quantized-deepseek-r1-on-device) | [Discord](https://discord.gg/nexa-ai) | [X (Twitter)](https://x.com/nexa_ai)