---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
license: apache-2.0
tags:
  - deepseek
  - qwen
  - qwen2
  - transformers
  - GGUF
---

DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant

NexaQuant

Background + Overview

DeepSeek-R1 has been making headlines for rivaling OpenAI's o1 reasoning model while remaining fully open-source. Many users want to run it locally to ensure data privacy, reduce latency, and maintain offline access. However, fitting such a large model onto personal devices typically requires quantization (e.g. Q4_K_M), which often sacrifices accuracy (up to ~22% loss) and undermines the benefits of running a reasoning model locally.

We've addressed this trade-off by quantizing the DeepSeek-R1 distilled model to one-fourth of its original size while preserving full-precision accuracy, so you can run powerful on-device reasoning wherever you are, with no compromises. Tests on an HP OmniBook AI PC with an AMD Ryzen™ AI 9 HX 370 processor showed a decoding speed of 66.40 tokens per second and peak RAM usage of just 1228 MB for the NexaQuant version, compared with 25.28 tokens per second and 3788 MB of RAM for the unquantized version.

How to run locally

NexaQuant is compatible with Nexa-SDK, Ollama, LM Studio, llama.cpp, and any llama.cpp-based project. Below, we outline several ways to run the model locally.
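
If you prefer to fetch the GGUF file yourself (for example, to point llama.cpp at a local path), one option is the Hugging Face CLI. This is a minimal sketch, assuming the huggingface_hub CLI is installed and using the repository ID referenced in the LM Studio step below:

pip install -U "huggingface_hub[cli]"
huggingface-cli download NexaAIDev/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant --local-dir ./DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant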

Option 1: Using Nexa SDK

Step 1: Install Nexa SDK

Follow the installation instructions in Nexa SDK's GitHub repository.
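
As a sketch, recent releases have also been installable via pip (the package name nexaai is an assumption here; prefer the repository's official instructions if they differ):

pip install nexaai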

Step 2: Run the model with Nexa

Execute the following command in your terminal:

nexa run DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant:q4_0

Option 2: Using llama.cpp

Step 1: Build llama.cpp on Your Device

Follow the "Building the project" instructions in the llama.cpp repository to build the project.

Step 2: Run the Model with llama.cpp

Once built, run llama-cli under <build_dir>/bin/:

./llama-cli \
    --model your/local/path/to/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant \
    --prompt 'Provide step-by-step reasoning enclosed in <think> </think> tags, followed by the final answer enclosed in \boxed{} tags.'
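
If you would rather query the model over HTTP than use the interactive CLI, llama.cpp also builds llama-server into the same bin directory. A minimal sketch, reusing the placeholder model path from above (the port is arbitrary):

./llama-server \
    --model your/local/path/to/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant \
    --port 8080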

Option 3: Using LM Studio

Step 1: Download and Install LM Studio

Get the latest version from the official website.

Step 2: Load and Run the Model

  1. In LM Studio's top panel, search for and select NexaAIDev/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant.
  2. Click Download (if not already downloaded) and wait for the model to load.
  3. Once loaded, go to the chat window and start a conversation.

Example

On the left is an example response from LM Studio's standard Q4_K_M build; on the right is the response from our NexaQuant version.

Prompt: A Common Investment Banking BrainTeaser Question

There is a 6x8 rectangular chocolate bar made up of small 1x1 bits. We want to break it into its 48 individual bits. We can break one piece of chocolate horizontally or vertically, but cannot break two pieces at once. What is the minimum number of breaks required?

Right Answer: 47
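
For reference: each break splits exactly one piece into two, so every break increases the total number of pieces by one. Going from 1 piece to 48 pieces therefore requires 48 - 1 = 47 breaks, regardless of the order.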

Example

Benchmarks

NexaQuant on Reasoning Benchmarks Compared to BF16 and LM Studio's Q4_K_M

1.5B:

Example

General capability is also much better preserved than with standard 4-bit quantization:

1.5B:

| Benchmark | Full 16-bit | llama.cpp (4-bit) | NexaQuant (4-bit) |
|---|---|---|---|
| HellaSwag | 35.81 | 34.31 | 34.60 |
| MMLU | 37.31 | 35.49 | 37.41 |
| Humanities | 31.86 | 34.87 | 30.97 |
| Social Sciences | 41.50 | 38.17 | 42.09 |
| STEM | 38.60 | 35.74 | 39.26 |
| ARC Easy | 67.55 | 54.20 | 65.53 |
| MathQA | 41.04 | 28.51 | 39.87 |
| PIQA | 65.56 | 61.70 | 65.07 |
| IFEval - Instruction - Loose | 25.06 | 24.77 | 28.54 |
| IFEval - Instruction - Strict | 23.62 | 22.94 | 27.94 |
| IFEval - Prompt - Loose | 13.86 | 10.29 | 15.71 |
| IFEval - Prompt - Strict | 12.57 | 8.09 | 15.16 |

What's next

  1. Run inference with the NexaQuant DeepSeek-R1 distilled model on NPUs.

  2. This model is designed for complex problem-solving, which is why it produces a longer thinking process. We understand this can be an issue in some cases, and we're actively working on improvements.

Follow us

If you liked our work, feel free to ⭐Star Nexa's GitHub Repo.

Interested in running DeepSeek R1 on your own devices with optimized CPU, GPU, and NPU acceleration or compressing your finetuned DeepSeek-Distill-R1? Let’s chat!

Blogs | Discord | X (Twitter)