Experimental Builds in F32 and Imatrix Upscaling for a 20B Model.

A mix of regular (non-imatrix) quants and imatrix quants.

These are attempts to further improve the F32 upscale projects noted below.

Use at your own risk, as output may be NSFW.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 2" model:

For all settings used with this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues), as well as methods to improve model performance for all use cases, including chat and roleplay, please see:

https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
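
As a quick illustration only, the sketch below shows one way to load a quant from this repo with llama-cpp-python and set basic samplers. The file name and the sampler values are placeholders, not recommendations; the parameters/samplers guide linked above is the authoritative source for "Class 2" settings.

```python
# Minimal sketch (not the author's recommended setup): running one of these
# GGUF quants with llama-cpp-python. File name and sampler values below are
# placeholders -- consult the linked guide for the actual "Class 2" settings.
from llama_cpp import Llama

llm = Llama(
    model_path="Psyonic-Cetacean-EXP-Q6_K.gguf",  # hypothetical quant file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

output = llm(
    "Write a short scene aboard a deep-sea research vessel.",
    max_tokens=256,
    temperature=0.8,     # placeholder sampler values; adjust per the
    top_p=0.95,          # parameters/samplers guide linked above
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```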

Related to the F32 Master Upscale project:

https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF

https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF-imatrix

In part related to:

https://huggingface.co/DavidAU/Dark-Forest-V1-Ultra-Quality-20b-GGUF

https://huggingface.co/DavidAU/Fimbulvetr-11B-Ultra-Quality-plus-imatrix-GGUF

GGUF model details:

Model size: 20B params
Architecture: llama
Quant types: 4-bit, 5-bit, 6-bit
