THOTH Experiment

Completed model: https://huggingface.co/IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF

(Image: thoth2.png)

This model is an experimental imatrix quant, calibrated with the "THE_KEY" dataset during quantization-aware training (QAT).

This model was converted to GGUF format from NousResearch/Hermes-3-Llama-3.2-3B using llama.cpp. Refer to the original model card for more details on the model.
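For reference, a typical llama.cpp conversion-plus-imatrix workflow looks like the sketch below. The file names, the calibration file standing in for THE_KEY, and the IQ4_NL target are illustrative assumptions, not the exact commands used to produce this model.

# Download the base model locally (directory name is illustrative)
huggingface-cli download NousResearch/Hermes-3-Llama-3.2-3B --local-dir Hermes-3-Llama-3.2-3B

# Convert the Hugging Face checkpoint to an f16 GGUF with llama.cpp's converter
python convert_hf_to_gguf.py Hermes-3-Llama-3.2-3B --outtype f16 --outfile hermes-3-llama-3.2-3b-f16.gguf

# Compute an importance matrix over the calibration dataset (file name assumed)
llama-imatrix -m hermes-3-llama-3.2-3b-f16.gguf -f the_key.txt -o imatrix.dat

# Quantize using the importance matrix (IQ4_NL shown as an example target)
llama-quantize --imatrix imatrix.dat hermes-3-llama-3.2-3b-f16.gguf thoth-llama-3.2-3b-iq4_nl.gguf IQ4_NL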

Use with llama.cpp

Install llama.cpp via Homebrew (works on macOS and Linux):

brew install llama.cpp

Invoke the llama.cpp server or the CLI.
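For example, using the Hub download flags available in recent llama.cpp builds. The --hf-file value below is a placeholder; substitute the actual GGUF filename from the repository.

# Run a one-off prompt with the CLI
llama-cli --hf-repo IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF --hf-file <model-file>.gguf -p "The meaning to life and the universe is"

# Or start the llama.cpp HTTP server
llama-server --hf-repo IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF --hf-file <model-file>.gguf -c 2048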

GGUF details
Model size: 3.21B params
Architecture: llama
Quantization: 4-bit

