Model name:
Violet_Twilight-v0.2

Description:
"Now for something a bit different, Violet_Twilight-v0.2! This model is a SLERP merge of Azure_Dusk-v0.2 and Crimson_Dawn-v0.2!"
– quoted from the original model card by Epiculous.
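
For readers unfamiliar with the technique, below is a minimal, generic sketch of spherical linear interpolation (SLERP) applied to two weight tensors. It is illustrative only and is not the author's actual merge script or configuration; the real merge details are on the original model pages.

```python
# Generic SLERP sketch: interpolate two weight tensors along the arc between
# them (treated as flattened vectors), falling back to plain linear
# interpolation when they are nearly parallel. Illustrative only.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float = 0.5, eps: float = 1e-8) -> np.ndarray:
    a_flat, b_flat = a.ravel().astype(np.float64), b.ravel().astype(np.float64)
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:  # nearly parallel tensors: SLERP degenerates to LERP
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (np.sin((1.0 - t) * omega) * a_flat + np.sin(t * omega) * b_flat) / np.sin(omega)
    return merged.reshape(a.shape).astype(a.dtype)
```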

Use the ChatML prompt format.
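
For reference, here is a minimal sketch of what a single-turn ChatML prompt looks like; the system prompt and message contents are placeholders, and SillyTavern's presets assemble this for you.

```python
# Minimal ChatML prompt builder for a single user turn; contents are placeholders.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```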

[Added] ARM quants:
Q4_0_4_4, Q4_0_4_8, Q4_0_8_8

Presets:
You can use ChatML presets within SillyTavern and adjust from there.
Alternatively, check out Virt-io's ChatML v1.9 presets; make sure you read their repository page for how to use them properly.
The author also provides links to custom sampler presets on the original model page.

Original model page:
https://huggingface.co/Epiculous/Violet_Twilight-v0.2

Quantized using llama.cpp-b3829:

1. Base ⇢ Convert-GGUF (FP16) ⇢ Generate-Imatrix-Data (FP16)
2. Base ⇢ Convert-GGUF (BF16) ⇢ Use-Imatrix-Data (FP16) ⇢ Quantize-GGUF (Imatrix Quants)
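
A rough sketch of that two-pass workflow is shown below, assuming the llama.cpp b3829 tools (convert_hf_to_gguf.py, llama-imatrix, llama-quantize) are built and on PATH. Directory names, the calibration text file, and the chosen quant types are illustrative assumptions, not the author's exact commands.

```python
# Illustrative two-pass imatrix quantization, mirroring the steps listed above.
# Paths, calibration data, and quant types are placeholders.
import subprocess

BASE = "Violet_Twilight-v0.2"        # local Hugging Face model directory (assumed)
CALIB = "imatrix-calibration.txt"    # calibration text for the importance matrix (assumed)

# Step 1: convert to an FP16 GGUF and generate importance-matrix data from it.
subprocess.run(["python", "convert_hf_to_gguf.py", BASE,
                "--outtype", "f16", "--outfile", f"{BASE}-F16.gguf"], check=True)
subprocess.run(["llama-imatrix", "-m", f"{BASE}-F16.gguf",
                "-f", CALIB, "-o", "imatrix.dat"], check=True)

# Step 2: convert to a BF16 GGUF, then produce imatrix quants using the FP16 data.
subprocess.run(["python", "convert_hf_to_gguf.py", BASE,
                "--outtype", "bf16", "--outfile", f"{BASE}-BF16.gguf"], check=True)
for quant in ["IQ4_XS", "Q4_K_M", "Q5_K_M"]:  # example quant types only
    subprocess.run(["llama-quantize", "--imatrix", "imatrix.dat",
                    f"{BASE}-BF16.gguf", f"{BASE}-{quant}.gguf", quant], check=True)
```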


GGUF metadata:
Model size: 12.2B params
Architecture: llama
Available quantization levels: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Quantized repository: Lewdiculous/Violet_Twilight-v0.2-GGUF-IQ-Imatrix