About

static quants of https://huggingface.co/EpistemeAI/Athene-Phi-3.5-mini-instruct-orpo

Weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
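
Multi-part files can be rejoined by simple byte-wise concatenation. As a minimal Python sketch (not from the original card; the part filenames are assumptions, so substitute the actual names from this repo):

```python
# Download both parts of a split quant and join them into one .gguf file.
# The filenames below are assumptions based on the "PART 1 / PART 2"
# entries in the table; replace them with the real names from this repo.
from huggingface_hub import hf_hub_download
import shutil

repo = "mradermacher/Athene-Phi-3.5-mini-instruct-orpo-GGUF"
parts = [
    "Athene-Phi-3.5-mini-instruct-orpo.Q4_K_M.gguf.part1of2",  # assumed name
    "Athene-Phi-3.5-mini-instruct-orpo.Q4_K_M.gguf.part2of2",  # assumed name
]

with open("Athene-Phi-3.5-mini-instruct-orpo.Q4_K_M.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as src:
            shutil.copyfileobj(src, out)  # plain byte-level concatenation
```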

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | IQ3_XS | 1.7 | |
| GGUF | IQ3_S | 1.8 | beats Q3_K* |
| GGUF | IQ3_M | 1.9 | |
| GGUF | Q4_0_4_4 | 2.3 | fast on arm, low quality |
| PART 1 PART 2 | Q2_K | 3.0 | |
| PART 1 PART 2 | Q3_K_S | 3.5 | |
| PART 1 PART 2 | Q3_K_M | 3.9 | lower quality |
| PART 1 PART 2 | Q3_K_L | 4.2 | |
| PART 1 PART 2 | IQ4_XS | 4.3 | |
| PART 1 PART 2 | Q4_K_S | 4.5 | fast, recommended |
| PART 1 PART 2 | Q4_K_M | 4.7 | fast, recommended |
| PART 1 PART 2 | Q5_K_S | 5.4 | |
| PART 1 PART 2 | Q5_K_M | 5.5 | |
| PART 1 PART 2 | Q6_K | 6.4 | very good quality |
| PART 1 PART 2 | Q8_0 | 8.2 | fast, best quality |
| PART 1 PART 2 | f16 | 15.4 | 16 bpw, overkill |
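
To illustrate loading one of the quants above, here is a minimal sketch using the llama-cpp-python bindings (my choice of runtime, not the card's; any GGUF-capable runtime works, and the local file name simply follows the Q4_K_M entry above):

```python
# Minimal chat sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Athene-Phi-3.5-mini-instruct-orpo.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,  # context window; lower it if RAM is tight
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```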

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[quant type comparison graph]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
