---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- llama-cpp
base_model: NousResearch/Hermes-3-Llama-3.2-3B
widget:
- example_title: Hermes 3
  messages:
  - role: system
    content: You are a sentient, superintelligent artificial general intelligence,
      here to teach and assist me.
  - role: user
    content: Write a short story about Goku discovering kirby has teamed up with Majin
      Buu to destroy the world.
library_name: transformers
model-index:
- name: Hermes-3-Llama-3.2-3B
  results: []
---
|
|
|
# THOTH Experiment (this model is a small quant of the IE model)

The completed Q5 imatrix model is at [THOTH](https://huggingface.co/IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF).
|
|
|
|
|
![thoth2.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/5hpy4IHflPFigikhFPAKj.png) |
|
|
|
# Experimental imatrix quant made with the "THE_KEY" dataset during QAT
|
This model was converted to GGUF format from [`NousResearch/Hermes-3-Llama-3.2-3B`](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) using llama.cpp. |
|
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) for more details on the model. |
|
|
|
## Use with llama.cpp |
|
Install llama.cpp via Homebrew (works on macOS and Linux):
|
|
|
```bash
brew install llama.cpp
```
|
Invoke the llama.cpp server or the CLI. |
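For example, a minimal sketch of both entry points (the GGUF filename below is a placeholder — substitute the actual quantized file you downloaded from this repo):

```bash
# CLI: generate a completion from a local GGUF file
# (hermes-3-llama-3.2-3b.Q5_K_S.gguf is a placeholder filename)
llama-cli -m hermes-3-llama-3.2-3b.Q5_K_S.gguf -p "Why is the sky blue?" -n 128

# Server: expose an OpenAI-compatible HTTP API on localhost:8080
llama-server -m hermes-3-llama-3.2-3b.Q5_K_S.gguf -c 2048
```

`-n` caps the number of tokens generated and `-c` sets the context window; both can be raised at the cost of memory and latency.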
|
|
|
|