llama-3.2-3b-onnx-qnn
llama-3.2-3b-onnx-qnn is an int4-quantized ONNX QNN build of Llama 3.2 3B Instruct, providing a small, fast inference implementation optimized for NPU deployment on Windows ARM64 AI PCs with the Snapdragon X Elite NPU.
Llama 3.2 3B Instruct is a 3B-parameter chat foundation model from Meta.
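Since QNN execution-provider support is the point of this build, the sketch below shows how an ONNX Runtime session can be pinned to the Snapdragon NPU. It is a minimal illustration, assuming the onnxruntime-qnn package on Windows ARM64; the file name `model.onnx` and the HTP backend path are placeholders, not details confirmed by this card.

```python
# Minimal sketch: bind an ONNX Runtime session to the Qualcomm NPU via
# the QNN execution provider. Assumes the onnxruntime-qnn package on
# Windows ARM64; "model.onnx" is a placeholder for the actual file name.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["QNNExecutionProvider"],
    provider_options=[{"backend_path": "QnnHtp.dll"}],  # HTP = Hexagon NPU backend
)
print(session.get_providers())  # verify QNNExecutionProvider was selected
```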
Model Description
- Developed by: meta-llama
- Quantized by: llmware
- Model type: llama-3.2
- Parameters: 3 billion
- Model Parent: meta-llama/Llama-3.2-3B-Instruct
- Language(s) (NLP): English
- License: Llama 3.2 Community License
- Uses: General chat use cases
- RAG Benchmark Accuracy Score: NA
- Quantization: int4
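For end-to-end generation, a hedged sketch using Microsoft's onnxruntime-genai package is shown below. It assumes the repository ships the genai_config.json that onnxruntime-genai expects next to the ONNX weights and that a recent package version (0.5+ API) is installed; neither is confirmed by this card.

```python
# Usage sketch with onnxruntime-genai (pip install onnxruntime-genai).
# Assumes the repo contains a genai_config.json; QNN device setup may
# require additional, platform-specific configuration.
import onnxruntime_genai as og
from huggingface_hub import snapshot_download

model_dir = snapshot_download("llmware/llama-3.2-3b-onnx-qnn")

model = og.Model(model_dir)           # loads the int4 ONNX model + config
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()    # incremental detokenizer for streaming

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is an NPU, in one sentence?"))

# Greedy decode, printing tokens as they arrive.
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```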