Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF

This model was converted to GGUF format from NousResearch/DeepHermes-3-Llama-3-8B-Preview using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


DeepHermes 3 Preview is the latest model in Nous Research's flagship Hermes series of LLMs, and one of the first models in the world to unify reasoning (long chains of thought that improve answer accuracy) and normal LLM response modes into one model. We have also improved LLM annotation, judgement, and function calling.

DeepHermes 3 Preview is one of the first LLMs to unify both "intuitive", traditional-mode responses and long chain-of-thought reasoning responses in a single model, toggled by a system prompt.

Hermes 3, the predecessor of DeepHermes 3, is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.

The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.

This is a preview Hermes with early reasoning capabilities, distilled from R1 across a variety of tasks that benefit from reasoning and objectivity. Some quirks may be discovered! Please let us know any interesting findings or issues you discover!

Note: To toggle REASONING ON, you must use the following system prompt:

You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside &lt;think&gt; tags, and then provide your solution or response to the problem.
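For example, reasoning mode can be toggled on directly from the llama.cpp CLI by passing the system prompt above. This is a sketch: the `-sys`/`--system-prompt` flag exists only in newer llama.cpp builds (check `llama-cli --help`), and the prompt is truncated here for readability; use the full text from above.

```shell
# Launch llama-cli in conversation mode (-cnv) with the reasoning system prompt.
# On older llama.cpp builds without -sys, embed the system prompt via the chat
# template instead.
llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF \
  --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf \
  -cnv \
  -sys "You are a deep thinking AI, you may use extremely long chains of thought [full prompt from above]"
```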


Use with llama.cpp

Install llama.cpp via Homebrew (works on macOS and Linux):

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -c 2048
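Once llama-server is running, it exposes an OpenAI-compatible HTTP API (on port 8080 by default), so you can query it with curl. The example below is a sketch: adjust the host and port to your setup, and supply the reasoning system prompt from above if you want chain-of-thought responses.

```shell
# Query the running llama-server through its OpenAI-compatible chat endpoint.
# The system message toggles DeepHermes's reasoning mode (truncated here).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a deep thinking AI, you may use extremely long chains of thought [full prompt from above]"},
      {"role": "user", "content": "What is 17 * 24?"}
    ]
  }'
```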

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
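For instance, an Nvidia GPU build on Linux would combine the flags mentioned above. Note this assumes the Make-based build described here; newer llama.cpp releases have moved to CMake, so consult the repo's build docs if `make` is no longer supported.

```shell
# Build with CURL support (needed for --hf-repo downloads) and CUDA acceleration.
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```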

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -c 2048
Model details: GGUF format, 8.03B params, llama architecture, 4-bit (Q4_K_S) quantization.