---
license: apache-2.0
---

CorticalStack/mistral-7b-openhermes-awq

CorticalStack/mistral-7b-openhermes-awq is an AWQ quantised version of CorticalStack/mistral-7b-openhermes-sft.

About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantisation method, currently supporting 4-bit quantisation. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
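
To make the "zero point" and "group size" ideas concrete, here is a toy sketch of group-wise low-bit weight quantisation in plain Python. This is not AWQ's actual algorithm (AWQ additionally searches for activation-aware per-channel scales); it only illustrates the storage scheme that the configuration below parameterises.

```python
def quantize_group(w, n_bit=4):
    """Toy zero-point quantisation of one weight group (illustrative only)."""
    qmax = 2 ** n_bit - 1                      # 15 levels for 4-bit weights
    scale = (max(w) - min(w)) / qmax           # one fp scale per group
    zero = round(-min(w) / scale)              # integer zero point per group
    # Map each weight to an unsigned n_bit integer, clipped to [0, qmax]
    q = [min(max(round(x / scale) + zero, 0), qmax) for x in w]
    # Dequantise to approximate the original weights at inference time
    dq = [(v - zero) * scale for v in q]
    return q, dq

w = [(-1) ** i * (i % 7) / 10 for i in range(128)]  # one 128-weight group
q, dq = quantize_group(w)
```

With a group size of 128, each group of 128 weights shares one scale and one zero point, trading a little metadata overhead for much better accuracy than a single per-tensor scale.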

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.


AWQ configuration

  • Zero point: True
  • Q group size: 128
  • W bit: 4
  • Version: GEMM
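
As a minimal sketch, the settings above correspond to a quantisation config dict in the style AutoAWQ uses (the key names follow AutoAWQ's convention; this dict is illustrative, not copied from the repository's config file):

```python
# AWQ quantisation settings for this model, expressed as an
# AutoAWQ-style quant_config dict (illustrative sketch).
quant_config = {
    "zero_point": True,   # asymmetric quantisation with a per-group zero point
    "q_group_size": 128,  # weights quantised in groups of 128
    "w_bit": 4,           # 4-bit weight precision
    "version": "GEMM",    # GEMM kernel variant
}
```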