---
base_model: Locutusque/llama-3-neural-chat-v2.2-8B
inference: false
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
library_name: transformers
quantized_by: Suparious
---

# Locutusque/llama-3-neural-chat-v2.2-8B AWQ


## Model Details

I fine-tuned llama-3 8B using an approach similar to Intel's neural chat language model, with slightly modified data sources to make it stronger in coding, math, and writing. I used both SFT and DPO-Positive (DPOP), which dramatically improves performance over standard DPO.
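
As a rough sketch of the idea behind DPO-Positive (Pal et al., 2024): it augments the standard DPO objective with a penalty term that discourages the policy's log-likelihood of the preferred completion from falling below the reference model's:

$$
\mathcal{L}_{\mathrm{DPOP}} = -\,\mathbb{E}\left[\log\sigma\left(\beta\left(\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)} - \lambda\max\left(0,\ \log\frac{\pi_{\mathrm{ref}}(y_w\mid x)}{\pi_\theta(y_w\mid x)}\right)\right)\right)\right]
$$

Here $y_w$ and $y_l$ are the chosen and rejected completions and $\lambda > 0$ weights the penalty. The $\max(0,\cdot)$ term is zero whenever the policy already assigns $y_w$ at least as much probability as the reference model, so the penalty only activates when training starts degrading preferred completions.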

## About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ at its most commonly used settings, it offers faster Transformers-based inference with equivalent or better quality.
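
A minimal sketch of loading an AWQ checkpoint with Hugging Face Transformers (which picks up AWQ models via the AutoAWQ kernels). The repository ID below is an assumption for illustration; substitute the actual ID of this quantized model:

```python
# pip install autoawq transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo ID; replace with the actual quantized model ID.
model_id = "solidrust/llama-3-neural-chat-v2.2-8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers detects the AWQ quantization config in the checkpoint and
# loads the 4-bit weights onto the GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 3 chat models expect the chat template shipped with the tokenizer.
messages = [{"role": "user", "content": "Write a haiku about quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```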

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by: