roleplaiapp/YuE-s1-7B-anneal-en-cot-Q4_K_S-GGUF

Repo: roleplaiapp/YuE-s1-7B-anneal-en-cot-Q4_K_S-GGUF
Original Model: YuE-s1-7B-anneal-en-cot
Quantized File: YuE-s1-7B-anneal-en-cot-Q4_K_S.gguf
Quantization: GGUF
Quantization Method: Q4_K_S

Overview

This is a GGUF Q4_K_S quantized version of the YuE-s1-7B-anneal-en-cot model.
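
The sketch below shows one way to fetch the quantized file and load it locally. It assumes llama-cpp-python is installed and can handle this architecture (reported as llama); the context size, GPU offload setting, and prompt are illustrative only, and the intended prompting format should be taken from the original YuE-s1-7B-anneal-en-cot card.

```python
# Minimal sketch: download the Q4_K_S file from this repo and load it
# with llama-cpp-python. Paths and parameters are placeholders.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized GGUF file listed above.
model_path = hf_hub_download(
    repo_id="roleplaiapp/YuE-s1-7B-anneal-en-cot-Q4_K_S-GGUF",
    filename="YuE-s1-7B-anneal-en-cot-Q4_K_S.gguf",
)

# Load the model; adjust n_ctx and n_gpu_layers to your hardware.
llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# Placeholder prompt; see the original model card for the expected format.
output = llm("Your prompt here", max_tokens=128)
print(output["choices"][0]["text"])
```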

Quantization By

I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Model Details

Model size: 6.22B params
Architecture: llama
Quantization: 4-bit (Q4_K_S)
