Qwen2.5-3B-Instruct-GPTQ-Int4

This version of Qwen2.5-3B-Instruct-GPTQ-Int4 has been converted to run on the Axera NPU using w4a16 quantization.


Compatible with Pulsar2 version: 3.4 (not released yet)

Conversion tool links:

If you are interested in model conversion, you can try exporting the axmodel from the original repo: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct-GPTQ-Int4

Pulsar2 documentation: How to Convert LLM from Huggingface to axmodel

AXera NPU LLM Runtime

Support Platform

Chips    w8a16          w4a16
AX650    5 tokens/sec   10 tokens/sec

How to use

Download all files from this repository to the device; the expected layout is shown below, and a scripted download alternative is sketched after the listing.

root@ax650:/mnt/qtang/llm-test/qwen2.5-3b# tree -L 1
.
├── qwen2.5-3b-gptq-int4-ax650
├── qwen2.5_tokenizer
├── qwen2.5_tokenizer.py
├── main_axcl_aarch64
├── main_axcl_x86
├── main_prefill
├── post_config.json
├── run_qwen2.5_3b_gptq_int4_ax650.sh
├── run_qwen2.5_3b_gptq_int4_axcl_aarch64.sh
└── run_qwen2.5_3b_gptq_int4_axcl_x86.sh
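
If you prefer to script the download, a minimal sketch using huggingface_hub (the repo id is this repository; "qwen2.5-3b" is an arbitrary local directory name, not mandated by the runtime):

# Hedged sketch: fetch every file in this repository to the device.
# Assumes huggingface_hub is installed and the device has network access.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="AXERA-TECH/Qwen2.5-3B-Instruct-GPTQ-Int4",
    local_dir="qwen2.5-3b",  # hypothetical target path; any directory works
)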

Start the Tokenizer service

root@ax650:/mnt/qtang/llm-test/qwen2.5-3b# python qwen2.5_tokenizer.py --port 12345
None None 151645 <|im_end|>
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
hello world<|im_end|>
<|im_start|>assistant

[151644, 8948, 198, 2610, 525, 1207, 16948, 11, 3465, 553, 54364, 14817, 13, 1446, 525, 264, 10950, 17847, 13, 151645, 198, 151644, 872, 198, 14990, 1879, 151645, 198, 151644, 77091, 198]
http://localhost:12345
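
For reference, the prompt and token ids above come from the standard Qwen2.5 chat template. A minimal sketch of the encoding step, assuming the transformers library and the bundled qwen2.5_tokenizer directory (the HTTP serving protocol itself is defined by qwen2.5_tokenizer.py in this repo):

# Hedged sketch of the encoding performed before serving.
from transformers import AutoTokenizer

# Load the tokenizer files shipped in this repo (local directory).
tokenizer = AutoTokenizer.from_pretrained("qwen2.5_tokenizer")

messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": "hello world"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(tokenizer(prompt)["input_ids"])  # should match the id list printed by the service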

Inference with AX650 Host, such as M4N-Dock (爱芯派Pro) or AX650N DEMO Board

Open another terminal and run run_qwen2.5_3b_gptq_int4_ax650.sh

root@ax650:/mnt/qtang/llm-test/qwen2.5-3b# ./run_qwen2.5_3b_gptq_int4_ax650.sh
[I][                            Init][ 125]: LLM init start
[I][                            Init][  26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ |  39 /  39 [19.30s<19.30s, 2.02 count/s] init post axmodel ok,remain_cmm(1811 MB)
[I][                            Init][ 241]: max_token_len : 1024
[I][                            Init][ 246]: kv_cache_size : 256, kv_cache_num: 1024
[I][                            Init][ 254]: prefill_token_num : 128
[I][                     load_config][ 281]: load config:
{
    "enable_repetition_penalty": false,
    "enable_temperature": true,
    "enable_top_k_sampling": true,
    "enable_top_p_sampling": false,
    "penalty_window": 20,
    "repetition_penalty": 1.2,
    "temperature": 0.9,
    "top_k": 10,
    "top_p": 0.8
}

[I][                            Init][ 268]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running

>> who are you
[I][                             Run][ 466]: ttft: 545.11 ms
I am Qwen, an artificial intelligence from Alibaba Cloud. I am here to assist you with any information or tasks you might have. How can I assist you today?

[N][                             Run][ 605]: hit eos,avg 9.90 token/s

>> 1+1=?
[I][                             Run][ 466]: ttft: 545.63 ms
1+1 equals 2.

[N][                             Run][ 605]: hit eos,avg 9.85 token/s
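
The post_config.json shown in the log above enables temperature and top-k sampling and disables top-p and the repetition penalty. A minimal sketch of that decoding rule (not the runtime's actual implementation; logits are assumed to be a NumPy array over the vocabulary):

import numpy as np

def sample_next_token(logits, temperature=0.9, top_k=10):
    # Mirror post_config.json: scale by temperature, keep the top_k
    # candidates, renormalize with softmax, then draw one token id.
    scaled = logits / temperature
    top_idx = np.argpartition(scaled, -top_k)[-top_k:]
    probs = np.exp(scaled[top_idx] - scaled[top_idx].max())
    probs /= probs.sum()
    return int(np.random.choice(top_idx, p=probs))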

Inference with M.2 Accelerator card

What is the M.2 Accelerator card? This DEMO is based on a Raspberry Pi 5.

(base) axera@raspberrypi:~/samples/qwen2.5-3b $ ./run_qwen2.5_3b_gptq_int4_axcl_aarch64.sh
build time: Feb 13 2025 15:44:57
[I][                            Init][ 111]: LLM init start
100% | ████████████████████████████████ |  39 /  39 [37.95s<37.95s, 1.03 count/s] init post axmodel ok remain_cmm(5391 MB)
[I][                            Init][ 226]: max_token_len : 1024
[I][                            Init][ 231]: kv_cache_size : 256, kv_cache_num: 1024
[I][                     load_config][ 282]: load config:
{
    "enable_repetition_penalty": false,
    "enable_temperature": true,
    "enable_top_k_sampling": true,
    "enable_top_p_sampling": false,
    "penalty_window": 20,
    "repetition_penalty": 1.2,
    "temperature": 0.9,
    "top_k": 10,
    "top_p": 0.8
}

[I][                            Init][ 288]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
>> who are you
I am Qwen, an artificial intelligence from Alibaba Cloud. I am here to assist you with your questions and help in any way I can. How can I assist you today?

[N][                             Run][ 610]: hit eos,avg 8.23 token/s

>> 1+1=?
1+1=2

[N][                             Run][ 610]: hit eos,avg 8.72 token/s

>> q

(base) axera@raspberrypi:~ $ axcl-smi
+------------------------------------------------------------------------------------------------+
| AXCL-SMI  V2.26.0_20250205130139                                Driver  V2.26.0_20250205130139 |
+-----------------------------------------+--------------+---------------------------------------+
| Card  Name                     Firmware | Bus-Id       |                          Memory-Usage |
| Fan   Temp                Pwr:Usage/Cap | CPU      NPU |                             CMM-Usage |
|=========================================+==============+=======================================|
|    0  AX650N                    V2.26.0 | 0000:01:00.0 |                174 MiB /      945 MiB |
|   --   43C                      -- / -- | 0%        0% |               1973 MiB /     7040 MiB |
+-----------------------------------------+--------------+---------------------------------------+

+------------------------------------------------------------------------------------------------+
| Processes:                                                                                     |
| Card      PID  Process Name                                                   NPU Memory Usage |
|================================================================================================|
|    0   470413  /home/axera/samples/qwen2.5-3b-gptq-int4/main_axcl_aarch64           1963704 KiB |
+------------------------------------------------------------------------------------------------+