---
license: apache-2.0
---
# CorticalStack/mistral-7b-openhermes-gptq
CorticalStack/mistral-7b-openhermes-gptq is a GPTQ-quantised version of [CorticalStack/mistral-7b-openhermes-sft](https://huggingface.co/CorticalStack/mistral-7b-openhermes-sft).
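A minimal sketch of loading the quantised model with 🤗 Transformers is shown below. It assumes a GPTQ-capable environment (`transformers`, `optimum`, and an installed GPTQ kernel backend such as `auto-gptq`) and a supported GPU; the prompt is illustrative only.

```python
# Minimal sketch: load the GPTQ model and generate text with transformers.
# Assumes transformers GPTQ support is installed and a compatible GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/mistral-7b-openhermes-gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantised weights on the available GPU(s);
# the GPTQ settings are picked up from the repo's quantization config.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain GPTQ quantisation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```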
GPTQ models are currently supported on Linux (NVIDIA/AMD) and Windows (NVIDIA only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers and web UIs; a client sketch for TGI follows the list.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
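As a minimal sketch of the TGI route: assuming a TGI container is already serving this model (e.g. started with `--model-id CorticalStack/mistral-7b-openhermes-gptq --quantize gptq`) and listening on `http://localhost:8080`, it can be queried with `huggingface_hub`. The URL and port are assumptions; adjust them to your deployment.

```python
# Minimal sketch: query a locally running TGI instance serving this GPTQ model.
# Assumes the server URL below matches your deployment.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")

# text_generation sends the prompt to the TGI generate endpoint.
response = client.text_generation(
    "Explain GPTQ quantisation in one sentence.",
    max_new_tokens=128,
)
print(response)
```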