modelId (string, lengths 5 to 139) | author (string, lengths 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-24 18:27:56) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 476 classes) | tags (sequence, lengths 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-24 18:26:04) | card (string, lengths 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---
tensorblock/mistral-7b-anthropic-GGUF | tensorblock | 2025-04-21T00:40:18Z | 73 | 0 | null | [
"gguf",
"alignment-handbook",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"dataset:HuggingFaceH4/ultrafeedback_binarized_fixed",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"base_model:HuggingFaceH4/mistral-7b-anthropic",
"base_model:quantized:HuggingFaceH4/mistral-7b-anthropic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-27T06:52:07Z | ---
license: apache-2.0
base_model: HuggingFaceH4/mistral-7b-anthropic
tags:
- alignment-handbook
- generated_from_trainer
- TensorBlock
- GGUF
datasets:
- HuggingFaceH4/ultrafeedback_binarized_fixed
- HuggingFaceH4/cai-conversation-harmless
model-index:
- name: mistral-7b-dpo-v21.0cai.0.2
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## HuggingFaceH4/mistral-7b-anthropic - GGUF
This repo contains GGUF format model files for [HuggingFaceH4/mistral-7b-anthropic](https://huggingface.co/HuggingFaceH4/mistral-7b-anthropic).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
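As a minimal build sketch (not part of the original card), you can pin llama.cpp to that commit before running these files; the build flags are illustrative and platform-dependent:
```shell
# Sketch: build llama.cpp at (or after) commit b4242.
# Adjust backends (CUDA, Metal, ...) for your hardware.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d   # commit b4242
cmake -B build
cmake --build build --config Release
```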
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
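As a usage sketch (assuming llama.cpp has been built as above and the Q4_K_M file from the table below has been downloaded; the system and user messages are illustrative placeholders), the template can be passed straight to `llama-cli`:
```shell
# Sketch: fill the template above and generate with llama.cpp's llama-cli.
# The binary path and prompt contents are placeholders, not part of this card.
./llama.cpp/build/bin/llama-cli \
  -m mistral-7b-anthropic-Q4_K_M.gguf \
  -p $'<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nSummarize GGUF in one sentence.</s>\n<|assistant|>\n' \
  -n 256
```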
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral-7b-anthropic-Q2_K.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-anthropic-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [mistral-7b-anthropic-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [mistral-7b-anthropic-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [mistral-7b-anthropic-Q4_0.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-anthropic-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [mistral-7b-anthropic-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [mistral-7b-anthropic-Q5_0.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-anthropic-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [mistral-7b-anthropic-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [mistral-7b-anthropic-Q6_K.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [mistral-7b-anthropic-Q8_0.gguf](https://huggingface.co/tensorblock/mistral-7b-anthropic-GGUF/blob/main/mistral-7b-anthropic-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mistral-7b-anthropic-GGUF --include "mistral-7b-anthropic-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mistral-7b-anthropic-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF | tensorblock | 2025-04-21T00:40:16Z | 68 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"dataset:databricks/databricks-dolly-15k",
"base_model:Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K",
"base_model:quantized:Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-27T06:47:39Z | ---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K - GGUF
This repo contains GGUF format model files for [Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K](https://huggingface.co/Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
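As an alternative usage sketch (assuming llama.cpp's `llama-server` has been built; the port and context size are illustrative values), the GGUF can also be served over llama.cpp's OpenAI-compatible HTTP API:
```shell
# Sketch: serve the Q4_K_M file with llama.cpp's llama-server.
# Port and context size are illustrative, not part of this card.
./llama.cpp/build/bin/llama-server \
  -m Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q4_K_M.gguf \
  -c 4096 --port 8080
```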
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q2_K.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q3_K_S.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q3_K_M.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q3_K_L.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q4_0.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q4_K_S.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q4_K_M.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q5_0.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q5_K_S.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q5_K_M.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q6_K.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q8_0.gguf](https://huggingface.co/tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF/blob/main/Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF --include "Instruct_Mixtral-8x7B-v0.1_Dolly15K-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Instruct_Mixtral-8x7B-v0.1_Dolly15K-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF | tensorblock | 2025-04-21T00:40:15Z | 25 | 0 | null | [
"gguf",
"alignment-handbook",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:kykim0/Llama-2-7b-ultrachat200k-2e",
"base_model:quantized:kykim0/Llama-2-7b-ultrachat200k-2e",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-27T06:14:17Z | ---
base_model: kykim0/Llama-2-7b-ultrachat200k-2e
tags:
- alignment-handbook
- generated_from_trainer
- TensorBlock
- GGUF
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: Llama-2-7b-hf-sft-full-2e
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kykim0/Llama-2-7b-ultrachat200k-2e - GGUF
This repo contains GGUF format model files for [kykim0/Llama-2-7b-ultrachat200k-2e](https://huggingface.co/kykim0/Llama-2-7b-ultrachat200k-2e).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-2-7b-ultrachat200k-2e-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-2-7b-ultrachat200k-2e-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Llama-2-7b-ultrachat200k-2e-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Llama-2-7b-ultrachat200k-2e-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Llama-2-7b-ultrachat200k-2e-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-2-7b-ultrachat200k-2e-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Llama-2-7b-ultrachat200k-2e-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Llama-2-7b-ultrachat200k-2e-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-2-7b-ultrachat200k-2e-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Llama-2-7b-ultrachat200k-2e-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Llama-2-7b-ultrachat200k-2e-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Llama-2-7b-ultrachat200k-2e-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF/blob/main/Llama-2-7b-ultrachat200k-2e-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF --include "Llama-2-7b-ultrachat200k-2e-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-2-7b-ultrachat200k-2e-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MetaModel-GGUF | tensorblock | 2025-04-21T00:40:14Z | 23 | 0 | null | [
"gguf",
"merge",
"mergekit",
"TensorBlock",
"GGUF",
"base_model:gagan3012/MetaModel",
"base_model:quantized:gagan3012/MetaModel",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-27T05:30:53Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- TensorBlock
- GGUF
base_model: gagan3012/MetaModel
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## gagan3012/MetaModel - GGUF
This repo contains GGUF format model files for [gagan3012/MetaModel](https://huggingface.co/gagan3012/MetaModel).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MetaModel-Q2_K.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [MetaModel-Q3_K_S.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [MetaModel-Q3_K_M.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [MetaModel-Q3_K_L.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [MetaModel-Q4_0.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MetaModel-Q4_K_S.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [MetaModel-Q4_K_M.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [MetaModel-Q5_0.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MetaModel-Q5_K_S.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [MetaModel-Q5_K_M.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [MetaModel-Q6_K.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [MetaModel-Q8_0.gguf](https://huggingface.co/tensorblock/MetaModel-GGUF/blob/main/MetaModel-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MetaModel-GGUF --include "MetaModel-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MetaModel-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Orion-14B-Base-GGUF | tensorblock | 2025-04-21T00:40:12Z | 30 | 0 | null | [
"gguf",
"code",
"model",
"llm",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"zh",
"ja",
"ko",
"base_model:OrionStarAI/Orion-14B-Base",
"base_model:quantized:OrionStarAI/Orion-14B-Base",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-27T05:00:42Z | ---
language:
- en
- zh
- ja
- ko
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
- model
- llm
- TensorBlock
- GGUF
base_model: OrionStarAI/Orion-14B-Base
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## OrionStarAI/Orion-14B-Base - GGUF
This repo contains GGUF format model files for [OrionStarAI/Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Orion-14B-Base-Q2_K.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q2_K.gguf) | Q2_K | 5.508 GB | smallest, significant quality loss - not recommended for most purposes |
| [Orion-14B-Base-Q3_K_S.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q3_K_S.gguf) | Q3_K_S | 6.404 GB | very small, high quality loss |
| [Orion-14B-Base-Q3_K_M.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q3_K_M.gguf) | Q3_K_M | 7.127 GB | very small, high quality loss |
| [Orion-14B-Base-Q3_K_L.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q3_K_L.gguf) | Q3_K_L | 7.756 GB | small, substantial quality loss |
| [Orion-14B-Base-Q4_0.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q4_0.gguf) | Q4_0 | 8.272 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Orion-14B-Base-Q4_K_S.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q4_K_S.gguf) | Q4_K_S | 8.334 GB | small, greater quality loss |
| [Orion-14B-Base-Q4_K_M.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q4_K_M.gguf) | Q4_K_M | 8.813 GB | medium, balanced quality - recommended |
| [Orion-14B-Base-Q5_0.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q5_0.gguf) | Q5_0 | 10.030 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Orion-14B-Base-Q5_K_S.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q5_K_S.gguf) | Q5_K_S | 10.030 GB | large, low quality loss - recommended |
| [Orion-14B-Base-Q5_K_M.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q5_K_M.gguf) | Q5_K_M | 10.309 GB | large, very low quality loss - recommended |
| [Orion-14B-Base-Q6_K.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q6_K.gguf) | Q6_K | 11.898 GB | very large, extremely low quality loss |
| [Orion-14B-Base-Q8_0.gguf](https://huggingface.co/tensorblock/Orion-14B-Base-GGUF/blob/main/Orion-14B-Base-Q8_0.gguf) | Q8_0 | 15.409 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Orion-14B-Base-GGUF --include "Orion-14B-Base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Orion-14B-Base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF | tensorblock | 2025-04-21T00:40:10Z | 27 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"ms",
"base_model:mesolitica/malaysian-mistral-7b-32k-instructions-v4",
"base_model:quantized:mesolitica/malaysian-mistral-7b-32k-instructions-v4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-27T04:11:40Z | ---
language:
- ms
base_model: mesolitica/malaysian-mistral-7b-32k-instructions-v4
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mesolitica/malaysian-mistral-7b-32k-instructions-v4 - GGUF
This repo contains GGUF format model files for [mesolitica/malaysian-mistral-7b-32k-instructions-v4](https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions-v4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [malaysian-mistral-7b-32k-instructions-v4-Q2_K.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [malaysian-mistral-7b-32k-instructions-v4-Q3_K_S.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [malaysian-mistral-7b-32k-instructions-v4-Q3_K_M.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [malaysian-mistral-7b-32k-instructions-v4-Q3_K_L.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [malaysian-mistral-7b-32k-instructions-v4-Q4_0.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [malaysian-mistral-7b-32k-instructions-v4-Q4_K_S.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [malaysian-mistral-7b-32k-instructions-v4-Q4_K_M.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [malaysian-mistral-7b-32k-instructions-v4-Q5_0.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [malaysian-mistral-7b-32k-instructions-v4-Q5_K_S.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [malaysian-mistral-7b-32k-instructions-v4-Q5_K_M.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [malaysian-mistral-7b-32k-instructions-v4-Q6_K.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [malaysian-mistral-7b-32k-instructions-v4-Q8_0.gguf](https://huggingface.co/tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF/blob/main/malaysian-mistral-7b-32k-instructions-v4-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF --include "malaysian-mistral-7b-32k-instructions-v4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/malaysian-mistral-7b-32k-instructions-v4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/stealth-v1.3-GGUF | tensorblock | 2025-04-21T00:40:08Z | 57 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:jan-hq/stealth-v1.3",
"base_model:quantized:jan-hq/stealth-v1.3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-27T04:07:59Z | ---
language:
- en
license: apache-2.0
base_model: jan-hq/stealth-v1.3
tags:
- TensorBlock
- GGUF
model-index:
- name: stealth-v1.3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.71
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.3
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jan-hq/stealth-v1.3 - GGUF
This repo contains GGUF format model files for [jan-hq/stealth-v1.3](https://huggingface.co/jan-hq/stealth-v1.3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [stealth-v1.3-Q2_K.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [stealth-v1.3-Q3_K_S.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [stealth-v1.3-Q3_K_M.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [stealth-v1.3-Q3_K_L.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [stealth-v1.3-Q4_0.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stealth-v1.3-Q4_K_S.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [stealth-v1.3-Q4_K_M.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [stealth-v1.3-Q5_0.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stealth-v1.3-Q5_K_S.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [stealth-v1.3-Q5_K_M.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [stealth-v1.3-Q6_K.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [stealth-v1.3-Q8_0.gguf](https://huggingface.co/tensorblock/stealth-v1.3-GGUF/blob/main/stealth-v1.3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/stealth-v1.3-GGUF --include "stealth-v1.3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/stealth-v1.3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF | tensorblock | 2025-04-21T00:40:06Z | 40 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"ml",
"base_model:abhinand/malayalam-llama-7b-instruct-v0.1",
"base_model:quantized:abhinand/malayalam-llama-7b-instruct-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-27T03:22:31Z | ---
language:
- en
- ml
license: llama2
base_model: abhinand/malayalam-llama-7b-instruct-v0.1
tags:
- TensorBlock
- GGUF
model-index:
- name: malayalam-llama-instruct-v0.1
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## abhinand/malayalam-llama-7b-instruct-v0.1 - GGUF
This repo contains GGUF format model files for [abhinand/malayalam-llama-7b-instruct-v0.1](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [malayalam-llama-7b-instruct-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q2_K.gguf) | Q2_K | 2.610 GB | smallest, significant quality loss - not recommended for most purposes |
| [malayalam-llama-7b-instruct-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.032 GB | very small, high quality loss |
| [malayalam-llama-7b-instruct-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.382 GB | very small, high quality loss |
| [malayalam-llama-7b-instruct-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.681 GB | small, substantial quality loss |
| [malayalam-llama-7b-instruct-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q4_0.gguf) | Q4_0 | 3.919 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [malayalam-llama-7b-instruct-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q4_K_S.gguf) | Q4_K_S | 3.950 GB | small, greater quality loss |
| [malayalam-llama-7b-instruct-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.174 GB | medium, balanced quality - recommended |
| [malayalam-llama-7b-instruct-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q5_0.gguf) | Q5_0 | 4.753 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [malayalam-llama-7b-instruct-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q5_K_S.gguf) | Q5_K_S | 4.753 GB | large, low quality loss - recommended |
| [malayalam-llama-7b-instruct-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q5_K_M.gguf) | Q5_K_M | 4.884 GB | large, very low quality loss - recommended |
| [malayalam-llama-7b-instruct-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q6_K.gguf) | Q6_K | 5.639 GB | very large, extremely low quality loss |
| [malayalam-llama-7b-instruct-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1-Q8_0.gguf) | Q8_0 | 7.303 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF --include "malayalam-llama-7b-instruct-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/malayalam-llama-7b-instruct-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF | tensorblock | 2025-04-21T00:40:05Z | 87 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"base_model:AIdenU/Mistral-7b-ko-Y24-DPO_v0.1",
"base_model:quantized:AIdenU/Mistral-7b-ko-Y24-DPO_v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-27T02:46:42Z | ---
language:
- ko
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: AIdenU/Mistral-7b-ko-Y24-DPO_v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## AIdenU/Mistral-7b-ko-Y24-DPO_v0.1 - GGUF
This repo contains GGUF format model files for [AIdenU/Mistral-7b-ko-Y24-DPO_v0.1](https://huggingface.co/AIdenU/Mistral-7b-ko-Y24-DPO_v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7b-ko-Y24-DPO_v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24-DPO_v0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF --include "Mistral-7b-ko-Y24-DPO_v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-7b-ko-Y24-DPO_v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
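Once the file is on disk, you can give it a quick try with llama.cpp. This is only a sketch under assumptions — the `./llama-cli` binary location, the example prompt, and the token count are ours, while the directory and file name come from the commands above:
```shell
# Hypothetical smoke test: feed a ChatML-formatted prompt (see the template above) to the downloaded quant
./llama-cli -m MY_LOCAL_DIR/Mistral-7b-ko-Y24-DPO_v0.1-Q2_K.gguf \
  -p $'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nPlease introduce yourself briefly.<|im_end|>\n<|im_start|>assistant\n' \
  -n 128
```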
|
tensorblock/nontoxic-bagel-34b-v0.2-GGUF | tensorblock | 2025-04-21T00:40:01Z | 39 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"base_model:jondurbin/nontoxic-bagel-34b-v0.2",
"base_model:quantized:jondurbin/nontoxic-bagel-34b-v0.2",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-27T01:20:54Z | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
tags:
- TensorBlock
- GGUF
base_model: jondurbin/nontoxic-bagel-34b-v0.2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jondurbin/nontoxic-bagel-34b-v0.2 - GGUF
This repo contains GGUF format model files for [jondurbin/nontoxic-bagel-34b-v0.2](https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [nontoxic-bagel-34b-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [nontoxic-bagel-34b-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [nontoxic-bagel-34b-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [nontoxic-bagel-34b-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [nontoxic-bagel-34b-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nontoxic-bagel-34b-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [nontoxic-bagel-34b-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [nontoxic-bagel-34b-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nontoxic-bagel-34b-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [nontoxic-bagel-34b-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [nontoxic-bagel-34b-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [nontoxic-bagel-34b-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/nontoxic-bagel-34b-v0.2-GGUF/blob/main/nontoxic-bagel-34b-v0.2-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/nontoxic-bagel-34b-v0.2-GGUF --include "nontoxic-bagel-34b-v0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/nontoxic-bagel-34b-v0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
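After downloading, a quick sanity check with a local llama.cpp build might look like the sketch below; the `./llama-cli` path and the example prompt are assumptions, not part of this repo:
```shell
# Hypothetical check: wrap the prompt in the [INST] <<SYS>> template shown above
./llama-cli -m MY_LOCAL_DIR/nontoxic-bagel-34b-v0.2-Q2_K.gguf \
  -p $'[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\nSummarize what a GGUF file is. [/INST]' \
  -n 128
```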
|
tensorblock/Sina-Loki-7b-Merge-GGUF | tensorblock | 2025-04-21T00:39:58Z | 137 | 0 | null | [
"gguf",
"mistral",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:Azazelle/Sina-Loki-7b-Merge",
"base_model:quantized:Azazelle/Sina-Loki-7b-Merge",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-27T00:16:08Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
- TensorBlock
- GGUF
license: cc-by-4.0
base_model: Azazelle/Sina-Loki-7b-Merge
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Azazelle/Sina-Loki-7b-Merge - GGUF
This repo contains GGUF format model files for [Azazelle/Sina-Loki-7b-Merge](https://huggingface.co/Azazelle/Sina-Loki-7b-Merge).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Sina-Loki-7b-Merge-Q2_K.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Sina-Loki-7b-Merge-Q3_K_S.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Sina-Loki-7b-Merge-Q3_K_M.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Sina-Loki-7b-Merge-Q3_K_L.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Sina-Loki-7b-Merge-Q4_0.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Sina-Loki-7b-Merge-Q4_K_S.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Sina-Loki-7b-Merge-Q4_K_M.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Sina-Loki-7b-Merge-Q5_0.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Sina-Loki-7b-Merge-Q5_K_S.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Sina-Loki-7b-Merge-Q5_K_M.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Sina-Loki-7b-Merge-Q6_K.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Sina-Loki-7b-Merge-Q8_0.gguf](https://huggingface.co/tensorblock/Sina-Loki-7b-Merge-GGUF/blob/main/Sina-Loki-7b-Merge-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Sina-Loki-7b-Merge-GGUF --include "Sina-Loki-7b-Merge-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Sina-Loki-7b-Merge-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
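To verify the download, the file can be loaded with a local llama.cpp build. A minimal sketch, assuming a `./llama-cli` binary; since the card documents no prompt template, a plain completion prompt is used:
```shell
# Hypothetical completion test on the downloaded quant
./llama-cli -m MY_LOCAL_DIR/Sina-Loki-7b-Merge-Q2_K.gguf -p "Once upon a time" -n 64
```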
|
tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF | tensorblock | 2025-04-21T00:39:56Z | 163 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:ewqr2130/7B_ppo_phiRM_2GPU_3e-7step_4000",
"base_model:quantized:ewqr2130/7B_ppo_phiRM_2GPU_3e-7step_4000",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-26T23:41:52Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: ewqr2130/7B_ppo_phiRM_2GPU_3e-7step_4000
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## ewqr2130/7B_ppo_phiRM_2GPU_3e-7step_4000 - GGUF
This repo contains GGUF format model files for [ewqr2130/7B_ppo_phiRM_2GPU_3e-7step_4000](https://huggingface.co/ewqr2130/7B_ppo_phiRM_2GPU_3e-7step_4000).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q2_K.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q3_K_S.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q3_K_M.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q3_K_L.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q4_0.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q4_K_S.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q4_K_M.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q5_0.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q5_K_S.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q5_K_M.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q6_K.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [7B_ppo_phiRM_2GPU_3e-7step_4000-Q8_0.gguf](https://huggingface.co/tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF/blob/main/7B_ppo_phiRM_2GPU_3e-7step_4000-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF --include "7B_ppo_phiRM_2GPU_3e-7step_4000-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/7B_ppo_phiRM_2GPU_3e-7step_4000-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
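A short local test with llama.cpp can confirm the file loads. Sketch only — the `./llama-cli` path, the prompt, and the token count are assumptions:
```shell
# Hypothetical test using the zephyr-style template shown above
./llama-cli -m MY_LOCAL_DIR/7B_ppo_phiRM_2GPU_3e-7step_4000-Q2_K.gguf \
  -p $'<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nExplain PPO in one sentence.</s>\n<|assistant|>\n' \
  -n 128
```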
|
tensorblock/SOLAR-Platypus-10.7B-v2-GGUF | tensorblock | 2025-04-21T00:39:55Z | 196 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:kyujinpy/SOLAR-Platypus-10.7B-v2",
"base_model:quantized:kyujinpy/SOLAR-Platypus-10.7B-v2",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-26T22:22:41Z | ---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- TensorBlock
- GGUF
base_model: kyujinpy/SOLAR-Platypus-10.7B-v2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kyujinpy/SOLAR-Platypus-10.7B-v2 - GGUF
This repo contains GGUF format model files for [kyujinpy/SOLAR-Platypus-10.7B-v2](https://huggingface.co/kyujinpy/SOLAR-Platypus-10.7B-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SOLAR-Platypus-10.7B-v2-Q2_K.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [SOLAR-Platypus-10.7B-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [SOLAR-Platypus-10.7B-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [SOLAR-Platypus-10.7B-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [SOLAR-Platypus-10.7B-v2-Q4_0.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SOLAR-Platypus-10.7B-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [SOLAR-Platypus-10.7B-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [SOLAR-Platypus-10.7B-v2-Q5_0.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SOLAR-Platypus-10.7B-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [SOLAR-Platypus-10.7B-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [SOLAR-Platypus-10.7B-v2-Q6_K.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [SOLAR-Platypus-10.7B-v2-Q8_0.gguf](https://huggingface.co/tensorblock/SOLAR-Platypus-10.7B-v2-GGUF/blob/main/SOLAR-Platypus-10.7B-v2-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SOLAR-Platypus-10.7B-v2-GGUF --include "SOLAR-Platypus-10.7B-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SOLAR-Platypus-10.7B-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
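If you want to confirm the quant runs, a llama.cpp build can load it directly. This is a sketch under assumptions (local `./llama-cli` binary, made-up prompt); no prompt template is documented for this model, so a plain completion is used:
```shell
# Hypothetical completion test on the downloaded quant
./llama-cli -m MY_LOCAL_DIR/SOLAR-Platypus-10.7B-v2-Q2_K.gguf -p "The three laws of robotics are" -n 64
```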
|
tensorblock/MoE-Merging-GGUF | tensorblock | 2025-04-21T00:39:53Z | 166 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Cartinoe5930/MoE-Merging",
"base_model:quantized:Cartinoe5930/MoE-Merging",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-26T22:05:23Z | ---
license: apache-2.0
base_model: Cartinoe5930/MoE-Merging
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Cartinoe5930/MoE-Merging - GGUF
This repo contains GGUF format model files for [Cartinoe5930/MoE-Merging](https://huggingface.co/Cartinoe5930/MoE-Merging).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MoE-Merging-Q2_K.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q2_K.gguf) | Q2_K | 8.843 GB | smallest, significant quality loss - not recommended for most purposes |
| [MoE-Merging-Q3_K_S.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q3_K_S.gguf) | Q3_K_S | 10.433 GB | very small, high quality loss |
| [MoE-Merging-Q3_K_M.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q3_K_M.gguf) | Q3_K_M | 11.580 GB | very small, high quality loss |
| [MoE-Merging-Q3_K_L.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q3_K_L.gguf) | Q3_K_L | 12.544 GB | small, substantial quality loss |
| [MoE-Merging-Q4_0.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q4_0.gguf) | Q4_0 | 13.624 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MoE-Merging-Q4_K_S.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q4_K_S.gguf) | Q4_K_S | 13.743 GB | small, greater quality loss |
| [MoE-Merging-Q4_K_M.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q4_K_M.gguf) | Q4_K_M | 14.610 GB | medium, balanced quality - recommended |
| [MoE-Merging-Q5_0.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q5_0.gguf) | Q5_0 | 16.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MoE-Merging-Q5_K_S.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q5_K_S.gguf) | Q5_K_S | 16.626 GB | large, low quality loss - recommended |
| [MoE-Merging-Q5_K_M.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q5_K_M.gguf) | Q5_K_M | 17.134 GB | large, very low quality loss - recommended |
| [MoE-Merging-Q6_K.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q6_K.gguf) | Q6_K | 19.817 GB | very large, extremely low quality loss |
| [MoE-Merging-Q8_0.gguf](https://huggingface.co/tensorblock/MoE-Merging-GGUF/blob/main/MoE-Merging-Q8_0.gguf) | Q8_0 | 25.666 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MoE-Merging-GGUF --include "MoE-Merging-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MoE-Merging-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
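A quick local run with llama.cpp is one way to check the download. Sketch only, with the `./llama-cli` path and prompt as assumptions; llama.cpp typically prepends the `<s>` BOS token itself, so only the `[INST]` wrapper from the template above is written out:
```shell
# Hypothetical test using the [INST] template shown above
./llama-cli -m MY_LOCAL_DIR/MoE-Merging-Q2_K.gguf \
  -p "[INST] Explain mixture-of-experts in two sentences. [/INST]" \
  -n 128
```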
|
tensorblock/TurdusDareBeagle-7B-GGUF | tensorblock | 2025-04-21T00:39:51Z | 225 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"udkai/Turdus",
"shadowml/DareBeagle-7B",
"TensorBlock",
"GGUF",
"base_model:leveldevai/TurdusDareBeagle-7B",
"base_model:quantized:leveldevai/TurdusDareBeagle-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T21:43:54Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- udkai/Turdus
- shadowml/DareBeagle-7B
- TensorBlock
- GGUF
base_model: leveldevai/TurdusDareBeagle-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## leveldevai/TurdusDareBeagle-7B - GGUF
This repo contains GGUF format model files for [leveldevai/TurdusDareBeagle-7B](https://huggingface.co/leveldevai/TurdusDareBeagle-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TurdusDareBeagle-7B-Q2_K.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [TurdusDareBeagle-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [TurdusDareBeagle-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [TurdusDareBeagle-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [TurdusDareBeagle-7B-Q4_0.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TurdusDareBeagle-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [TurdusDareBeagle-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [TurdusDareBeagle-7B-Q5_0.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TurdusDareBeagle-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [TurdusDareBeagle-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [TurdusDareBeagle-7B-Q6_K.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [TurdusDareBeagle-7B-Q8_0.gguf](https://huggingface.co/tensorblock/TurdusDareBeagle-7B-GGUF/blob/main/TurdusDareBeagle-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TurdusDareBeagle-7B-GGUF --include "TurdusDareBeagle-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TurdusDareBeagle-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
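As a follow-up check, the downloaded file can be run with a local llama.cpp build. Minimal sketch under assumptions (`./llama-cli` binary, made-up prompt); the card documents no prompt template, so a plain prompt is used:
```shell
# Hypothetical completion test on the downloaded quant
./llama-cli -m MY_LOCAL_DIR/TurdusDareBeagle-7B-Q2_K.gguf -p "Q: What is a model merge? A:" -n 64
```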
|
tensorblock/llama2-13b-sft-dpo-GGUF | tensorblock | 2025-04-21T00:39:44Z | 215 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:etri-xainlp/llama2-13b-sft-dpo",
"base_model:quantized:etri-xainlp/llama2-13b-sft-dpo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T20:35:44Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: etri-xainlp/llama2-13b-sft-dpo
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## etri-xainlp/llama2-13b-sft-dpo - GGUF
This repo contains GGUF format model files for [etri-xainlp/llama2-13b-sft-dpo](https://huggingface.co/etri-xainlp/llama2-13b-sft-dpo).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama2-13b-sft-dpo-Q2_K.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-13b-sft-dpo-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [llama2-13b-sft-dpo-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [llama2-13b-sft-dpo-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [llama2-13b-sft-dpo-Q4_0.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-13b-sft-dpo-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [llama2-13b-sft-dpo-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [llama2-13b-sft-dpo-Q5_0.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-13b-sft-dpo-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [llama2-13b-sft-dpo-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [llama2-13b-sft-dpo-Q6_K.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [llama2-13b-sft-dpo-Q8_0.gguf](https://huggingface.co/tensorblock/llama2-13b-sft-dpo-GGUF/blob/main/llama2-13b-sft-dpo-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/llama2-13b-sft-dpo-GGUF --include "llama2-13b-sft-dpo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/llama2-13b-sft-dpo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
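To make sure the file works locally, a llama.cpp build can load it. Sketch only; the `./llama-cli` path and prompt are assumptions, and no prompt template is documented for this model:
```shell
# Hypothetical completion test on the downloaded quant
./llama-cli -m MY_LOCAL_DIR/llama2-13b-sft-dpo-Q2_K.gguf -p "Instruction: greet the user politely. Response:" -n 64
```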
|
tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF | tensorblock | 2025-04-21T00:39:43Z | 195 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:phanerozoic/Tiny-Cowboy-1.1b-v0.1",
"base_model:quantized:phanerozoic/Tiny-Cowboy-1.1b-v0.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-26T20:27:22Z | ---
license: cc-by-nc-4.0
language:
- en
widget:
- text: 'Howdy! What is best about the prairie, cowpoke?
'
example_title: Color of a Typical Cowboy Hat
tags:
- TensorBlock
- GGUF
base_model: phanerozoic/Tiny-Cowboy-1.1b-v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## phanerozoic/Tiny-Cowboy-1.1b-v0.1 - GGUF
This repo contains GGUF format model files for [phanerozoic/Tiny-Cowboy-1.1b-v0.1](https://huggingface.co/phanerozoic/Tiny-Cowboy-1.1b-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Tiny-Cowboy-1.1b-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [Tiny-Cowboy-1.1b-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [Tiny-Cowboy-1.1b-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [Tiny-Cowboy-1.1b-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [Tiny-Cowboy-1.1b-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Tiny-Cowboy-1.1b-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [Tiny-Cowboy-1.1b-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [Tiny-Cowboy-1.1b-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Tiny-Cowboy-1.1b-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [Tiny-Cowboy-1.1b-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [Tiny-Cowboy-1.1b-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [Tiny-Cowboy-1.1b-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF/blob/main/Tiny-Cowboy-1.1b-v0.1-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF --include "Tiny-Cowboy-1.1b-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Tiny-Cowboy-1.1b-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
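For a quick local try-out with llama.cpp, something like the sketch below should work; the `./llama-cli` path, system line, and token count are assumptions, while the user turn reuses the card's own example question in the template shown above:
```shell
# Hypothetical test using the zephyr-style template shown above
./llama-cli -m MY_LOCAL_DIR/Tiny-Cowboy-1.1b-v0.1-Q2_K.gguf \
  -p $'<|system|>\nYou are a friendly cowpoke.</s>\n<|user|>\nHowdy! What is best about the prairie?</s>\n<|assistant|>\n' \
  -n 128
```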
|
tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF | tensorblock | 2025-04-21T00:39:40Z | 22 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:tatsu-lab/alpaca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T19:25:08Z | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
base_model: LordNoah/Alpaca_spin_tuned_gpt2_large
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## LordNoah/Alpaca_spin_tuned_gpt2_large - GGUF
This repo contains GGUF format model files for [LordNoah/Alpaca_spin_tuned_gpt2_large](https://huggingface.co/LordNoah/Alpaca_spin_tuned_gpt2_large).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Alpaca_spin_tuned_gpt2_large-Q2_K.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q2_K.gguf) | Q2_K | 0.346 GB | smallest, significant quality loss - not recommended for most purposes |
| [Alpaca_spin_tuned_gpt2_large-Q3_K_S.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q3_K_S.gguf) | Q3_K_S | 0.394 GB | very small, high quality loss |
| [Alpaca_spin_tuned_gpt2_large-Q3_K_M.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q3_K_M.gguf) | Q3_K_M | 0.458 GB | very small, high quality loss |
| [Alpaca_spin_tuned_gpt2_large-Q3_K_L.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q3_K_L.gguf) | Q3_K_L | 0.494 GB | small, substantial quality loss |
| [Alpaca_spin_tuned_gpt2_large-Q4_0.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q4_0.gguf) | Q4_0 | 0.497 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Alpaca_spin_tuned_gpt2_large-Q4_K_S.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q4_K_S.gguf) | Q4_K_S | 0.500 GB | small, greater quality loss |
| [Alpaca_spin_tuned_gpt2_large-Q4_K_M.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q4_K_M.gguf) | Q4_K_M | 0.549 GB | medium, balanced quality - recommended |
| [Alpaca_spin_tuned_gpt2_large-Q5_0.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q5_0.gguf) | Q5_0 | 0.593 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Alpaca_spin_tuned_gpt2_large-Q5_K_S.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q5_K_S.gguf) | Q5_K_S | 0.593 GB | large, low quality loss - recommended |
| [Alpaca_spin_tuned_gpt2_large-Q5_K_M.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q5_K_M.gguf) | Q5_K_M | 0.632 GB | large, very low quality loss - recommended |
| [Alpaca_spin_tuned_gpt2_large-Q6_K.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q6_K.gguf) | Q6_K | 0.696 GB | very large, extremely low quality loss |
| [Alpaca_spin_tuned_gpt2_large-Q8_0.gguf](https://huggingface.co/tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF/blob/main/Alpaca_spin_tuned_gpt2_large-Q8_0.gguf) | Q8_0 | 0.898 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF --include "Alpaca_spin_tuned_gpt2_large-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Alpaca_spin_tuned_gpt2_large-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
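A last sanity check can be done with llama.cpp, which should also handle GPT-2-architecture GGUF files. Sketch only, with the `./llama-cli` path and prompt as assumptions:
```shell
# Hypothetical completion test on the downloaded quant
./llama-cli -m MY_LOCAL_DIR/Alpaca_spin_tuned_gpt2_large-Q2_K.gguf -p "Below is an instruction. Write a response." -n 64
```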
|
tensorblock/pythia_70m_sft-GGUF | tensorblock | 2025-04-21T00:39:37Z | 137 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:tatsu-lab/alpaca_farm",
"base_model:tlc4418/pythia_70m_sft",
"base_model:quantized:tlc4418/pythia_70m_sft",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T19:15:11Z | ---
datasets:
- tatsu-lab/alpaca_farm
base_model: tlc4418/pythia_70m_sft
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## tlc4418/pythia_70m_sft - GGUF
This repo contains GGUF format model files for [tlc4418/pythia_70m_sft](https://huggingface.co/tlc4418/pythia_70m_sft).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [pythia_70m_sft-Q2_K.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q2_K.gguf) | Q2_K | 0.038 GB | smallest, significant quality loss - not recommended for most purposes |
| [pythia_70m_sft-Q3_K_S.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q3_K_S.gguf) | Q3_K_S | 0.042 GB | very small, high quality loss |
| [pythia_70m_sft-Q3_K_M.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q3_K_M.gguf) | Q3_K_M | 0.044 GB | very small, high quality loss |
| [pythia_70m_sft-Q3_K_L.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q3_K_L.gguf) | Q3_K_L | 0.045 GB | small, substantial quality loss |
| [pythia_70m_sft-Q4_0.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q4_0.gguf) | Q4_0 | 0.048 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pythia_70m_sft-Q4_K_S.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q4_K_S.gguf) | Q4_K_S | 0.048 GB | small, greater quality loss |
| [pythia_70m_sft-Q4_K_M.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q4_K_M.gguf) | Q4_K_M | 0.049 GB | medium, balanced quality - recommended |
| [pythia_70m_sft-Q5_0.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q5_0.gguf) | Q5_0 | 0.054 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pythia_70m_sft-Q5_K_S.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q5_K_S.gguf) | Q5_K_S | 0.054 GB | large, low quality loss - recommended |
| [pythia_70m_sft-Q5_K_M.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q5_K_M.gguf) | Q5_K_M | 0.055 GB | large, very low quality loss - recommended |
| [pythia_70m_sft-Q6_K.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q6_K.gguf) | Q6_K | 0.060 GB | very large, extremely low quality loss |
| [pythia_70m_sft-Q8_0.gguf](https://huggingface.co/tensorblock/pythia_70m_sft-GGUF/blob/main/pythia_70m_sft-Q8_0.gguf) | Q8_0 | 0.077 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/pythia_70m_sft-GGUF --include "pythia_70m_sft-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/pythia_70m_sft-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Voldemort-10B-DPO-GGUF | tensorblock | 2025-04-21T00:39:33Z | 136 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:PetroGPT/Voldemort-10B-DPO",
"base_model:quantized:PetroGPT/Voldemort-10B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T17:47:36Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: PetroGPT/Voldemort-10B-DPO
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## PetroGPT/Voldemort-10B-DPO - GGUF
This repo contains GGUF format model files for [PetroGPT/Voldemort-10B-DPO](https://huggingface.co/PetroGPT/Voldemort-10B-DPO).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Voldemort-10B-DPO-Q2_K.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [Voldemort-10B-DPO-Q3_K_S.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [Voldemort-10B-DPO-Q3_K_M.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [Voldemort-10B-DPO-Q3_K_L.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [Voldemort-10B-DPO-Q4_0.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Voldemort-10B-DPO-Q4_K_S.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [Voldemort-10B-DPO-Q4_K_M.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [Voldemort-10B-DPO-Q5_0.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Voldemort-10B-DPO-Q5_K_S.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [Voldemort-10B-DPO-Q5_K_M.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [Voldemort-10B-DPO-Q6_K.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [Voldemort-10B-DPO-Q8_0.gguf](https://huggingface.co/tensorblock/Voldemort-10B-DPO-GGUF/blob/main/Voldemort-10B-DPO-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Voldemort-10B-DPO-GGUF --include "Voldemort-10B-DPO-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Voldemort-10B-DPO-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/flux-7b-v0.2-GGUF | tensorblock | 2025-04-21T00:39:27Z | 46 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:chanwit/flux-7b-v0.2",
"base_model:quantized:chanwit/flux-7b-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-26T15:57:13Z | ---
license: apache-2.0
language:
- en
base_model: chanwit/flux-7b-v0.2
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## chanwit/flux-7b-v0.2 - GGUF
This repo contains GGUF format model files for [chanwit/flux-7b-v0.2](https://huggingface.co/chanwit/flux-7b-v0.2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
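As an illustration of how this template is consumed, the sketch below fills it with placeholder system and user messages and passes the result to llama.cpp's `llama-cli`. The binary, model path, messages, and generation length are all assumptions made for the example, not requirements of this repository.
```shell
# Example only: the template above filled with placeholder messages
# and passed to llama.cpp's llama-cli as a raw prompt string.
llama-cli -m MY_LOCAL_DIR/flux-7b-v0.2-Q4_K_M.gguf -n 256 -p \
'<|system|>
You are a concise assistant.</s>
<|user|>
Explain what a GGUF quantization level is in one sentence.</s>
<|assistant|>
'
```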
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [flux-7b-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [flux-7b-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [flux-7b-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [flux-7b-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [flux-7b-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [flux-7b-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [flux-7b-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [flux-7b-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [flux-7b-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [flux-7b-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [flux-7b-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [flux-7b-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/flux-7b-v0.2-GGUF/blob/main/flux-7b-v0.2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/flux-7b-v0.2-GGUF --include "flux-7b-v0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/flux-7b-v0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Bagel-Hermes-2x34B-GGUF | tensorblock | 2025-04-21T00:39:22Z | 59 | 0 | null | [
"gguf",
"yi",
"moe",
"TensorBlock",
"GGUF",
"base_model:Weyaxi/Bagel-Hermes-2x34B",
"base_model:quantized:Weyaxi/Bagel-Hermes-2x34B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-26T10:17:44Z | ---
tags:
- yi
- moe
- TensorBlock
- GGUF
license: apache-2.0
base_model: Weyaxi/Bagel-Hermes-2x34B
model-index:
- name: Bagel-Hermes-2x34b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.82
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Weyaxi/Bagel-Hermes-2x34B - GGUF
This repo contains GGUF format model files for [Weyaxi/Bagel-Hermes-2x34B](https://huggingface.co/Weyaxi/Bagel-Hermes-2x34B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
```
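For clarity, here is a hedged example of the Llama-2-style template above filled in and passed to llama.cpp's `llama-cli`; the model path, messages, and token count are placeholders, not part of this repository.
```shell
# Example only: the [INST] template above filled with placeholder system/user messages.
llama-cli -m MY_LOCAL_DIR/Bagel-Hermes-2x34B-Q4_K_M.gguf -n 256 -p \
'[INST] <<SYS>>
You are a helpful assistant.
<</SYS>>
Which quantization level trades the least quality for size? [/INST]'
```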
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Bagel-Hermes-2x34B-Q2_K.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q2_K.gguf) | Q2_K | 22.394 GB | smallest, significant quality loss - not recommended for most purposes |
| [Bagel-Hermes-2x34B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q3_K_S.gguf) | Q3_K_S | 26.318 GB | very small, high quality loss |
| [Bagel-Hermes-2x34B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q3_K_M.gguf) | Q3_K_M | 29.237 GB | very small, high quality loss |
| [Bagel-Hermes-2x34B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q3_K_L.gguf) | Q3_K_L | 31.768 GB | small, substantial quality loss |
| [Bagel-Hermes-2x34B-Q4_0.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q4_0.gguf) | Q4_0 | 34.334 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Bagel-Hermes-2x34B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q4_K_S.gguf) | Q4_K_S | 34.594 GB | small, greater quality loss |
| [Bagel-Hermes-2x34B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q4_K_M.gguf) | Q4_K_M | 36.661 GB | medium, balanced quality - recommended |
| [Bagel-Hermes-2x34B-Q5_0.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q5_0.gguf) | Q5_0 | 41.878 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Bagel-Hermes-2x34B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q5_K_S.gguf) | Q5_K_S | 41.878 GB | large, low quality loss - recommended |
| [Bagel-Hermes-2x34B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q5_K_M.gguf) | Q5_K_M | 43.077 GB | large, very low quality loss - recommended |
| [Bagel-Hermes-2x34B-Q6_K.gguf](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q6_K.gguf) | Q6_K | 49.893 GB | very large, extremely low quality loss |
| [Bagel-Hermes-2x34B-Q8_0](https://huggingface.co/tensorblock/Bagel-Hermes-2x34B-GGUF/blob/main/Bagel-Hermes-2x34B-Q8_0) | Q8_0 | 64.621 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Bagel-Hermes-2x34B-GGUF --include "Bagel-Hermes-2x34B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Bagel-Hermes-2x34B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/7Bx4_DPO-GGUF | tensorblock | 2025-04-21T00:39:15Z | 43 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:yunconglong/7Bx4_DPO",
"base_model:quantized:yunconglong/7Bx4_DPO",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T06:01:38Z | ---
license: mit
base_model: yunconglong/7Bx4_DPO
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## yunconglong/7Bx4_DPO - GGUF
This repo contains GGUF format model files for [yunconglong/7Bx4_DPO](https://huggingface.co/yunconglong/7Bx4_DPO).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [7Bx4_DPO-Q2_K.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q2_K.gguf) | Q2_K | 8.843 GB | smallest, significant quality loss - not recommended for most purposes |
| [7Bx4_DPO-Q3_K_S.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q3_K_S.gguf) | Q3_K_S | 10.433 GB | very small, high quality loss |
| [7Bx4_DPO-Q3_K_M.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q3_K_M.gguf) | Q3_K_M | 11.580 GB | very small, high quality loss |
| [7Bx4_DPO-Q3_K_L.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q3_K_L.gguf) | Q3_K_L | 12.544 GB | small, substantial quality loss |
| [7Bx4_DPO-Q4_0.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q4_0.gguf) | Q4_0 | 13.624 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [7Bx4_DPO-Q4_K_S.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q4_K_S.gguf) | Q4_K_S | 13.743 GB | small, greater quality loss |
| [7Bx4_DPO-Q4_K_M.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q4_K_M.gguf) | Q4_K_M | 14.610 GB | medium, balanced quality - recommended |
| [7Bx4_DPO-Q5_0.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q5_0.gguf) | Q5_0 | 16.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [7Bx4_DPO-Q5_K_S.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q5_K_S.gguf) | Q5_K_S | 16.626 GB | large, low quality loss - recommended |
| [7Bx4_DPO-Q5_K_M.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q5_K_M.gguf) | Q5_K_M | 17.134 GB | large, very low quality loss - recommended |
| [7Bx4_DPO-Q6_K.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q6_K.gguf) | Q6_K | 19.817 GB | very large, extremely low quality loss |
| [7Bx4_DPO-Q8_0.gguf](https://huggingface.co/tensorblock/7Bx4_DPO-GGUF/blob/main/7Bx4_DPO-Q8_0.gguf) | Q8_0 | 25.666 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/7Bx4_DPO-GGUF --include "7Bx4_DPO-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/7Bx4_DPO-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/UTENA-7B-V3-GGUF | tensorblock | 2025-04-21T00:39:14Z | 37 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"AI-B/UTENA-7B-UNA-V2",
"AI-B/UTENA-7B-NSFW-V2",
"TensorBlock",
"GGUF",
"base_model:AI-B/UTENA-7B-V3",
"base_model:quantized:AI-B/UTENA-7B-V3",
"license:unlicense",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T05:05:37Z | ---
license: unlicense
tags:
- merge
- mergekit
- lazymergekit
- AI-B/UTENA-7B-UNA-V2
- AI-B/UTENA-7B-NSFW-V2
- TensorBlock
- GGUF
base_model: AI-B/UTENA-7B-V3
model-index:
- name: UTENA-7B-V3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-V3
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## AI-B/UTENA-7B-V3 - GGUF
This repo contains GGUF format model files for [AI-B/UTENA-7B-V3](https://huggingface.co/AI-B/UTENA-7B-V3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [UTENA-7B-V3-Q2_K.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [UTENA-7B-V3-Q3_K_S.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [UTENA-7B-V3-Q3_K_M.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [UTENA-7B-V3-Q3_K_L.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [UTENA-7B-V3-Q4_0.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [UTENA-7B-V3-Q4_K_S.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [UTENA-7B-V3-Q4_K_M.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [UTENA-7B-V3-Q5_0.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [UTENA-7B-V3-Q5_K_S.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [UTENA-7B-V3-Q5_K_M.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [UTENA-7B-V3-Q6_K.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [UTENA-7B-V3-Q8_0.gguf](https://huggingface.co/tensorblock/UTENA-7B-V3-GGUF/blob/main/UTENA-7B-V3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/UTENA-7B-V3-GGUF --include "UTENA-7B-V3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/UTENA-7B-V3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Distilled-HermesChat-7B-GGUF | tensorblock | 2025-04-21T00:39:10Z | 25 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-0106",
"argilla/distilabeled-Hermes-2.5-Mistral-7B",
"TensorBlock",
"GGUF",
"base_model:flemmingmiguel/Distilled-HermesChat-7B",
"base_model:quantized:flemmingmiguel/Distilled-HermesChat-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-26T04:27:10Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-0106
- argilla/distilabeled-Hermes-2.5-Mistral-7B
- TensorBlock
- GGUF
base_model: flemmingmiguel/Distilled-HermesChat-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## flemmingmiguel/Distilled-HermesChat-7B - GGUF
This repo contains GGUF format model files for [flemmingmiguel/Distilled-HermesChat-7B](https://huggingface.co/flemmingmiguel/Distilled-HermesChat-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>GPT4 Correct System: {system_prompt}<|end_of_turn|>GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
```
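As a rough illustration, the single-line OpenChat-style template above can be filled and passed to llama.cpp's `llama-cli` as sketched below. The path and messages are placeholders, and the leading `<s>` token is left out on the assumption that the runtime adds the BOS token itself.
```shell
# Example only: the "GPT4 Correct" template above, filled with placeholder messages.
# The <s> BOS token from the template is omitted because llama.cpp typically
# prepends it automatically when tokenizing the prompt.
llama-cli -m MY_LOCAL_DIR/Distilled-HermesChat-7B-Q4_K_M.gguf -n 256 -p \
'GPT4 Correct System: You are a helpful assistant.<|end_of_turn|>GPT4 Correct User: Name one strength of Mistral-based chat models.<|end_of_turn|>GPT4 Correct Assistant:'
```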
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Distilled-HermesChat-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Distilled-HermesChat-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Distilled-HermesChat-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Distilled-HermesChat-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Distilled-HermesChat-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Distilled-HermesChat-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Distilled-HermesChat-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Distilled-HermesChat-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Distilled-HermesChat-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Distilled-HermesChat-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Distilled-HermesChat-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Distilled-HermesChat-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Distilled-HermesChat-7B-GGUF/blob/main/Distilled-HermesChat-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Distilled-HermesChat-7B-GGUF --include "Distilled-HermesChat-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Distilled-HermesChat-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF | tensorblock | 2025-04-21T00:39:03Z | 46 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors",
"base_model:quantized:Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-26T02:03:34Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors - GGUF
This repo contains GGUF format model files for [Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors](https://huggingface.co/Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
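To show the ChatML template above in use, here is a small sketch that fills it with placeholder messages and hands it to llama.cpp's `llama-cli`; the model path, messages, and settings are illustrative assumptions rather than part of this repository.
```shell
# Example only: the ChatML template above filled with placeholder messages.
llama-cli -m MY_LOCAL_DIR/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q4_K_M.gguf -n 256 -p \
'<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Summarize the difference between Q4_K_M and Q5_K_M in one sentence.<|im_end|>
<|im_start|>assistant
'
```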
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q2_K.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q3_K_S.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q3_K_M.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q3_K_L.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q4_0.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q4_K_S.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q4_K_M.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q5_0.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q5_K_S.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q5_K_M.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q6_K.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Harsha-Hermes-2.5-Mistral-7B_safetensors-Q8_0.gguf](https://huggingface.co/tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF/blob/main/Harsha-Hermes-2.5-Mistral-7B_safetensors-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF --include "Harsha-Hermes-2.5-Mistral-7B_safetensors-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Harsha-Hermes-2.5-Mistral-7B_safetensors-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mistral7b-bartending-recipe-v1-GGUF | tensorblock | 2025-04-21T00:38:56Z | 45 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:StatPan/mistral7b-bartending-recipe-v1",
"base_model:quantized:StatPan/mistral7b-bartending-recipe-v1",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T22:23:15Z | ---
base_model: StatPan/mistral7b-bartending-recipe-v1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## StatPan/mistral7b-bartending-recipe-v1 - GGUF
This repo contains GGUF format model files for [StatPan/mistral7b-bartending-recipe-v1](https://huggingface.co/StatPan/mistral7b-bartending-recipe-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral7b-bartending-recipe-v1-Q2_K.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral7b-bartending-recipe-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [mistral7b-bartending-recipe-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [mistral7b-bartending-recipe-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [mistral7b-bartending-recipe-v1-Q4_0.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral7b-bartending-recipe-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [mistral7b-bartending-recipe-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [mistral7b-bartending-recipe-v1-Q5_0.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral7b-bartending-recipe-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [mistral7b-bartending-recipe-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [mistral7b-bartending-recipe-v1-Q6_K.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [mistral7b-bartending-recipe-v1-Q8_0.gguf](https://huggingface.co/tensorblock/mistral7b-bartending-recipe-v1-GGUF/blob/main/mistral7b-bartending-recipe-v1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mistral7b-bartending-recipe-v1-GGUF --include "mistral7b-bartending-recipe-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mistral7b-bartending-recipe-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
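If you prefer to stay in Python, the same single-file download can be done with the `huggingface_hub` library. The snippet below is a minimal sketch, not part of the original card: it assumes `huggingface_hub` is installed (see the `pip` command above), and the local directory name is just the same placeholder used in the shell examples.
```python
from huggingface_hub import hf_hub_download

# Download one quantized file from the repo listed in the table above.
# "MY_LOCAL_DIR" mirrors the placeholder used in the shell commands.
local_path = hf_hub_download(
    repo_id="tensorblock/mistral7b-bartending-recipe-v1-GGUF",
    filename="mistral7b-bartending-recipe-v1-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # absolute path to the downloaded GGUF file
```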
|
tensorblock/vicuna-class-tutor-13b-ep3-GGUF | tensorblock | 2025-04-21T00:38:53Z | 85 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:luffycodes/vicuna-class-tutor-13b-ep3",
"base_model:quantized:luffycodes/vicuna-class-tutor-13b-ep3",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T21:21:03Z | ---
license: llama2
base_model: luffycodes/vicuna-class-tutor-13b-ep3
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## luffycodes/vicuna-class-tutor-13b-ep3 - GGUF
This repo contains GGUF format model files for [luffycodes/vicuna-class-tutor-13b-ep3](https://huggingface.co/luffycodes/vicuna-class-tutor-13b-ep3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [vicuna-class-tutor-13b-ep3-Q2_K.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [vicuna-class-tutor-13b-ep3-Q3_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [vicuna-class-tutor-13b-ep3-Q3_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [vicuna-class-tutor-13b-ep3-Q3_K_L.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [vicuna-class-tutor-13b-ep3-Q4_0.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vicuna-class-tutor-13b-ep3-Q4_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [vicuna-class-tutor-13b-ep3-Q4_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [vicuna-class-tutor-13b-ep3-Q5_0.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vicuna-class-tutor-13b-ep3-Q5_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [vicuna-class-tutor-13b-ep3-Q5_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [vicuna-class-tutor-13b-ep3-Q6_K.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [vicuna-class-tutor-13b-ep3-Q8_0.gguf](https://huggingface.co/tensorblock/vicuna-class-tutor-13b-ep3-GGUF/blob/main/vicuna-class-tutor-13b-ep3-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/vicuna-class-tutor-13b-ep3-GGUF --include "vicuna-class-tutor-13b-ep3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/vicuna-class-tutor-13b-ep3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/CarbonVillain-en-10.7B-v3-GGUF | tensorblock | 2025-04-21T00:38:52Z | 75 | 0 | null | [
"gguf",
"merge",
"slerp",
"TensorBlock",
"GGUF",
"en",
"base_model:jeonsworld/CarbonVillain-en-10.7B-v3",
"base_model:quantized:jeonsworld/CarbonVillain-en-10.7B-v3",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-25T20:45:20Z | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- merge
- slerp
- TensorBlock
- GGUF
base_model: jeonsworld/CarbonVillain-en-10.7B-v3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jeonsworld/CarbonVillain-en-10.7B-v3 - GGUF
This repo contains GGUF format model files for [jeonsworld/CarbonVillain-en-10.7B-v3](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
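As a rough illustration, the template shown above can be filled with plain string formatting before the text is passed to llama.cpp or any other GGUF runtime. The whitespace below follows the template as printed, and the system/user strings are placeholder examples.
```python
# Minimal sketch: fill the prompt template shown above with str.format.
TEMPLATE = (
    "### System:\n"
    "{system_prompt}\n"
    "### User:\n"
    "{prompt}\n"
    "### Assistant:\n"
)

prompt_text = TEMPLATE.format(
    system_prompt="You are a helpful assistant.",
    prompt="Summarize the idea behind SLERP model merging in two sentences.",
)
print(prompt_text)
```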
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CarbonVillain-en-10.7B-v3-Q2_K.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [CarbonVillain-en-10.7B-v3-Q3_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [CarbonVillain-en-10.7B-v3-Q3_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [CarbonVillain-en-10.7B-v3-Q3_K_L.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [CarbonVillain-en-10.7B-v3-Q4_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CarbonVillain-en-10.7B-v3-Q4_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [CarbonVillain-en-10.7B-v3-Q4_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [CarbonVillain-en-10.7B-v3-Q5_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CarbonVillain-en-10.7B-v3-Q5_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [CarbonVillain-en-10.7B-v3-Q5_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [CarbonVillain-en-10.7B-v3-Q6_K.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [CarbonVillain-en-10.7B-v3-Q8_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v3-GGUF/blob/main/CarbonVillain-en-10.7B-v3-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CarbonVillain-en-10.7B-v3-GGUF --include "CarbonVillain-en-10.7B-v3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CarbonVillain-en-10.7B-v3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/reglu-15B-GGUF | tensorblock | 2025-04-21T00:38:46Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:SparseLLM/reglu-15B",
"base_model:quantized:SparseLLM/reglu-15B",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T19:27:22Z | ---
language:
- en
library_name: transformers
license: llama2
tags:
- TensorBlock
- GGUF
base_model: SparseLLM/reglu-15B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## SparseLLM/reglu-15B - GGUF
This repo contains GGUF format model files for [SparseLLM/reglu-15B](https://huggingface.co/SparseLLM/reglu-15B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [reglu-15B-Q2_K.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q2_K.gguf) | Q2_K | 0.516 GB | smallest, significant quality loss - not recommended for most purposes |
| [reglu-15B-Q3_K_S.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q3_K_S.gguf) | Q3_K_S | 0.597 GB | very small, high quality loss |
| [reglu-15B-Q3_K_M.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q3_K_M.gguf) | Q3_K_M | 0.661 GB | very small, high quality loss |
| [reglu-15B-Q3_K_L.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q3_K_L.gguf) | Q3_K_L | 0.717 GB | small, substantial quality loss |
| [reglu-15B-Q4_0.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q4_0.gguf) | Q4_0 | 0.764 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [reglu-15B-Q4_K_S.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q4_K_S.gguf) | Q4_K_S | 0.770 GB | small, greater quality loss |
| [reglu-15B-Q4_K_M.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q4_K_M.gguf) | Q4_K_M | 0.811 GB | medium, balanced quality - recommended |
| [reglu-15B-Q5_0.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q5_0.gguf) | Q5_0 | 0.922 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [reglu-15B-Q5_K_S.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q5_K_S.gguf) | Q5_K_S | 0.922 GB | large, low quality loss - recommended |
| [reglu-15B-Q5_K_M.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q5_K_M.gguf) | Q5_K_M | 0.946 GB | large, very low quality loss - recommended |
| [reglu-15B-Q6_K.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q6_K.gguf) | Q6_K | 1.089 GB | very large, extremely low quality loss |
| [reglu-15B-Q8_0.gguf](https://huggingface.co/tensorblock/reglu-15B-GGUF/blob/main/reglu-15B-Q8_0.gguf) | Q8_0 | 1.410 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/reglu-15B-GGUF --include "reglu-15B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/reglu-15B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF | tensorblock | 2025-04-21T00:38:45Z | 59 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:jonflynn/Mistral-7B-Instruct-v0.2-sharded2GB",
"base_model:quantized:jonflynn/Mistral-7B-Instruct-v0.2-sharded2GB",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-25T18:49:06Z | ---
base_model: jonflynn/Mistral-7B-Instruct-v0.2-sharded2GB
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jonflynn/Mistral-7B-Instruct-v0.2-sharded2GB - GGUF
This repo contains GGUF format model files for [jonflynn/Mistral-7B-Instruct-v0.2-sharded2GB](https://huggingface.co/jonflynn/Mistral-7B-Instruct-v0.2-sharded2GB).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
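One way to apply this template is through `llama-cpp-python`. The snippet below is a minimal sketch rather than part of the original card: it assumes `llama-cpp-python` is installed, uses the Q4_K_M file from the table below, and picks arbitrary context and sampling values. The BOS token (`<s>`) is normally inserted by the tokenizer, so only the `[INST]` wrapper is written out.
```python
from llama_cpp import Llama

# Load a quantized file downloaded from this repo (path is an example).
llm = Llama(model_path="Mistral-7B-Instruct-v0.2-sharded2GB-Q4_K_M.gguf", n_ctx=4096)

# Wrap the user message in the [INST] ... [/INST] template shown above.
prompt = "[INST] Explain what a GGUF file is in one paragraph. [/INST]"
out = llm(prompt, max_tokens=200, temperature=0.7)
print(out["choices"][0]["text"])
```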
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-Instruct-v0.2-sharded2GB-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF/blob/main/Mistral-7B-Instruct-v0.2-sharded2GB-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF --include "Mistral-7B-Instruct-v0.2-sharded2GB-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-7B-Instruct-v0.2-sharded2GB-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/FrankenDPO-4x7B-bf16-GGUF | tensorblock | 2025-04-21T00:38:44Z | 47 | 0 | null | [
"gguf",
"merge",
"moe",
"TensorBlock",
"GGUF",
"en",
"base_model:Kquant03/FrankenDPO-4x7B-bf16",
"base_model:quantized:Kquant03/FrankenDPO-4x7B-bf16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T18:23:59Z | ---
license: apache-2.0
language:
- en
tags:
- merge
- moe
- TensorBlock
- GGUF
base_model: Kquant03/FrankenDPO-4x7B-bf16
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Kquant03/FrankenDPO-4x7B-bf16 - GGUF
This repo contains GGUF format model files for [Kquant03/FrankenDPO-4x7B-bf16](https://huggingface.co/Kquant03/FrankenDPO-4x7B-bf16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [FrankenDPO-4x7B-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q2_K.gguf) | Q2_K | 8.843 GB | smallest, significant quality loss - not recommended for most purposes |
| [FrankenDPO-4x7B-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q3_K_S.gguf) | Q3_K_S | 10.433 GB | very small, high quality loss |
| [FrankenDPO-4x7B-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q3_K_M.gguf) | Q3_K_M | 11.580 GB | very small, high quality loss |
| [FrankenDPO-4x7B-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q3_K_L.gguf) | Q3_K_L | 12.544 GB | small, substantial quality loss |
| [FrankenDPO-4x7B-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q4_0.gguf) | Q4_0 | 13.624 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [FrankenDPO-4x7B-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q4_K_S.gguf) | Q4_K_S | 13.743 GB | small, greater quality loss |
| [FrankenDPO-4x7B-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q4_K_M.gguf) | Q4_K_M | 14.610 GB | medium, balanced quality - recommended |
| [FrankenDPO-4x7B-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q5_0.gguf) | Q5_0 | 16.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [FrankenDPO-4x7B-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q5_K_S.gguf) | Q5_K_S | 16.626 GB | large, low quality loss - recommended |
| [FrankenDPO-4x7B-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q5_K_M.gguf) | Q5_K_M | 17.134 GB | large, very low quality loss - recommended |
| [FrankenDPO-4x7B-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q6_K.gguf) | Q6_K | 19.817 GB | very large, extremely low quality loss |
| [FrankenDPO-4x7B-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/FrankenDPO-4x7B-bf16-GGUF/blob/main/FrankenDPO-4x7B-bf16-Q8_0.gguf) | Q8_0 | 25.666 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/FrankenDPO-4x7B-bf16-GGUF --include "FrankenDPO-4x7B-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/FrankenDPO-4x7B-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mistral-7b-dpo-merge-v1.1-GGUF | tensorblock | 2025-04-21T00:38:43Z | 54 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:Intel/orca_dpo_pairs",
"base_model:mncai/mistral-7b-dpo-merge-v1.1",
"base_model:quantized:mncai/mistral-7b-dpo-merge-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T18:13:06Z | ---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
language:
- en
tags:
- TensorBlock
- GGUF
base_model: mncai/mistral-7b-dpo-merge-v1.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mncai/mistral-7b-dpo-merge-v1.1 - GGUF
This repo contains GGUF format model files for [mncai/mistral-7b-dpo-merge-v1.1](https://huggingface.co/mncai/mistral-7b-dpo-merge-v1.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral-7b-dpo-merge-v1.1-Q2_K.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-dpo-merge-v1.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [mistral-7b-dpo-merge-v1.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [mistral-7b-dpo-merge-v1.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [mistral-7b-dpo-merge-v1.1-Q4_0.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-dpo-merge-v1.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [mistral-7b-dpo-merge-v1.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [mistral-7b-dpo-merge-v1.1-Q5_0.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-dpo-merge-v1.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [mistral-7b-dpo-merge-v1.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [mistral-7b-dpo-merge-v1.1-Q6_K.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [mistral-7b-dpo-merge-v1.1-Q8_0.gguf](https://huggingface.co/tensorblock/mistral-7b-dpo-merge-v1.1-GGUF/blob/main/mistral-7b-dpo-merge-v1.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mistral-7b-dpo-merge-v1.1-GGUF --include "mistral-7b-dpo-merge-v1.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mistral-7b-dpo-merge-v1.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/stealth-v1.2-GGUF | tensorblock | 2025-04-21T00:38:41Z | 38 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:jan-hq/stealth-v1.2",
"base_model:quantized:jan-hq/stealth-v1.2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-25T17:32:09Z | ---
language:
- en
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: jan-hq/stealth-v1.2
model-index:
- name: stealth-v1.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.23
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v1.2
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jan-hq/stealth-v1.2 - GGUF
This repo contains GGUF format model files for [jan-hq/stealth-v1.2](https://huggingface.co/jan-hq/stealth-v1.2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
```
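Because this is the ChatML-style format, `llama-cpp-python` can apply it for you via its built-in `chatml` chat format. The snippet below is a hedged sketch rather than part of the original card: the file name comes from the table below, and the context size and messages are example values.
```python
from llama_cpp import Llama

# Let llama-cpp-python render the <|im_start|>/<|im_end|> template shown above.
llm = Llama(
    model_path="stealth-v1.2-Q4_K_M.gguf",  # example path from the table below
    n_ctx=4096,
    chat_format="chatml",
)
resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain quantization in two sentences."},
    ],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```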
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [stealth-v1.2-Q2_K.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [stealth-v1.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [stealth-v1.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [stealth-v1.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [stealth-v1.2-Q4_0.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stealth-v1.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [stealth-v1.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [stealth-v1.2-Q5_0.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stealth-v1.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [stealth-v1.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [stealth-v1.2-Q6_K.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [stealth-v1.2-Q8_0.gguf](https://huggingface.co/tensorblock/stealth-v1.2-GGUF/blob/main/stealth-v1.2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/stealth-v1.2-GGUF --include "stealth-v1.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/stealth-v1.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Silicon-Medley-GGUF | tensorblock | 2025-04-21T00:38:37Z | 38 | 0 | null | [
"gguf",
"mistral",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:Azazelle/Silicon-Medley",
"base_model:quantized:Azazelle/Silicon-Medley",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-25T16:46:59Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
- TensorBlock
- GGUF
license: cc-by-4.0
base_model: Azazelle/Silicon-Medley
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Azazelle/Silicon-Medley - GGUF
This repo contains GGUF format model files for [Azazelle/Silicon-Medley](https://huggingface.co/Azazelle/Silicon-Medley).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Silicon-Medley-Q2_K.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Silicon-Medley-Q3_K_S.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Silicon-Medley-Q3_K_M.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Silicon-Medley-Q3_K_L.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Silicon-Medley-Q4_0.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Silicon-Medley-Q4_K_S.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Silicon-Medley-Q4_K_M.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Silicon-Medley-Q5_0.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Silicon-Medley-Q5_K_S.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Silicon-Medley-Q5_K_M.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Silicon-Medley-Q6_K.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Silicon-Medley-Q8_0.gguf](https://huggingface.co/tensorblock/Silicon-Medley-GGUF/blob/main/Silicon-Medley-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Silicon-Medley-GGUF --include "Silicon-Medley-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Silicon-Medley-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/zephyr-python-ru-merged-GGUF | tensorblock | 2025-04-21T00:38:29Z | 167 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"ru",
"dataset:MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru",
"dataset:MexIvanov/CodeExercise-Python-27k-ru",
"dataset:zelkame/ru-stackoverflow-py",
"base_model:MexIvanov/zephyr-python-ru-merged",
"base_model:quantized:MexIvanov/zephyr-python-ru-merged",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-25T08:30:07Z | ---
pipeline_tag: text-generation
license: mit
datasets:
- MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru
- MexIvanov/CodeExercise-Python-27k-ru
- zelkame/ru-stackoverflow-py
language:
- en
- ru
base_model: MexIvanov/zephyr-python-ru-merged
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## MexIvanov/zephyr-python-ru-merged - GGUF
This repo contains GGUF format model files for [MexIvanov/zephyr-python-ru-merged](https://huggingface.co/MexIvanov/zephyr-python-ru-merged).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [zephyr-python-ru-merged-Q2_K.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-python-ru-merged-Q3_K_S.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [zephyr-python-ru-merged-Q3_K_M.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [zephyr-python-ru-merged-Q3_K_L.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [zephyr-python-ru-merged-Q4_0.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-python-ru-merged-Q4_K_S.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [zephyr-python-ru-merged-Q4_K_M.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [zephyr-python-ru-merged-Q5_0.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-python-ru-merged-Q5_K_S.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [zephyr-python-ru-merged-Q5_K_M.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [zephyr-python-ru-merged-Q6_K.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [zephyr-python-ru-merged-Q8_0.gguf](https://huggingface.co/tensorblock/zephyr-python-ru-merged-GGUF/blob/main/zephyr-python-ru-merged-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/zephyr-python-ru-merged-GGUF --include "zephyr-python-ru-merged-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/zephyr-python-ru-merged-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
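The pattern-based download also has a Python counterpart in `huggingface_hub.snapshot_download`. The sketch below mirrors the CLI command above and keeps the same placeholder directory name; it assumes `huggingface_hub` is installed.
```python
from huggingface_hub import snapshot_download

# Fetch every file in the repo that matches the Q4_K pattern.
snapshot_download(
    repo_id="tensorblock/zephyr-python-ru-merged-GGUF",
    local_dir="MY_LOCAL_DIR",
    allow_patterns=["*Q4_K*gguf"],
)
```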
|
tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF | tensorblock | 2025-04-21T00:38:26Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:nvidia/Llama-3_1-Nemotron-51B-Instruct",
"base_model:quantized:nvidia/Llama-3_1-Nemotron-51B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-25T08:20:15Z | ---
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
- TensorBlock
- GGUF
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
base_model: nvidia/Llama-3_1-Nemotron-51B-Instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## nvidia/Llama-3_1-Nemotron-51B-Instruct - GGUF
This repo contains GGUF format model files for [nvidia/Llama-3_1-Nemotron-51B-Instruct](https://huggingface.co/nvidia/Llama-3_1-Nemotron-51B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4391](https://github.com/ggerganov/llama.cpp/commit/9ba399dfa7f115effc63d48e6860a94c9faa31b2).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3_1-Nemotron-51B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q2_K.gguf) | Q2_K | 19.419 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3_1-Nemotron-51B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q3_K_S.gguf) | Q3_K_S | 22.652 GB | very small, high quality loss |
| [Llama-3_1-Nemotron-51B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q3_K_M.gguf) | Q3_K_M | 25.182 GB | very small, high quality loss |
| [Llama-3_1-Nemotron-51B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q3_K_L.gguf) | Q3_K_L | 27.350 GB | small, substantial quality loss |
| [Llama-3_1-Nemotron-51B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q4_0.gguf) | Q4_0 | 29.252 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3_1-Nemotron-51B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q4_K_S.gguf) | Q4_K_S | 29.484 GB | small, greater quality loss |
| [Llama-3_1-Nemotron-51B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q4_K_M.gguf) | Q4_K_M | 31.037 GB | medium, balanced quality - recommended |
| [Llama-3_1-Nemotron-51B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q5_0.gguf) | Q5_0 | 35.559 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3_1-Nemotron-51B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q5_K_S.gguf) | Q5_K_S | 35.559 GB | large, low quality loss - recommended |
| [Llama-3_1-Nemotron-51B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q5_K_M.gguf) | Q5_K_M | 36.465 GB | large, very low quality loss - recommended |
| [Llama-3_1-Nemotron-51B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q6_K.gguf) | Q6_K | 42.259 GB | very large, extremely low quality loss |
| [Llama-3_1-Nemotron-51B-Instruct-Q8_0](https://huggingface.co/tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q8_0) | Q8_0 | 54.731 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF --include "Llama-3_1-Nemotron-51B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-3_1-Nemotron-51B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
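To chat with the downloaded file, you can let llama.cpp apply the Llama-3.1 template shown above from the GGUF metadata. A rough sketch, assuming a locally built `llama-cli` recent enough to support conversation mode, the Q4_K_M file, and enough GPU memory for the offloaded layers (all values illustrative):
```shell
# Interactive chat; -cnv applies the chat template stored in the GGUF metadata
./llama-cli -m MY_LOCAL_DIR/Llama-3_1-Nemotron-51B-Instruct-Q4_K_M.gguf \
  -cnv -c 4096 -ngl 99 --temp 0.6
```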
|
tensorblock/Chinese-Mixtral-8x7B-GGUF | tensorblock | 2025-04-21T00:38:24Z | 57 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:HIT-SCIR/Chinese-Mixtral-8x7B",
"base_model:quantized:HIT-SCIR/Chinese-Mixtral-8x7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T08:10:58Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: HIT-SCIR/Chinese-Mixtral-8x7B
model-index:
- name: Chinese-Mixtral-8x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HIT-SCIR/Chinese-Mixtral-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HIT-SCIR/Chinese-Mixtral-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HIT-SCIR/Chinese-Mixtral-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.86
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HIT-SCIR/Chinese-Mixtral-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HIT-SCIR/Chinese-Mixtral-8x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HIT-SCIR/Chinese-Mixtral-8x7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## HIT-SCIR/Chinese-Mixtral-8x7B - GGUF
This repo contains GGUF format model files for [HIT-SCIR/Chinese-Mixtral-8x7B](https://huggingface.co/HIT-SCIR/Chinese-Mixtral-8x7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Chinese-Mixtral-8x7B-Q2_K.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q2_K.gguf) | Q2_K | 17.429 GB | smallest, significant quality loss - not recommended for most purposes |
| [Chinese-Mixtral-8x7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q3_K_S.gguf) | Q3_K_S | 20.561 GB | very small, high quality loss |
| [Chinese-Mixtral-8x7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q3_K_M.gguf) | Q3_K_M | 22.675 GB | very small, high quality loss |
| [Chinese-Mixtral-8x7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q3_K_L.gguf) | Q3_K_L | 24.298 GB | small, substantial quality loss |
| [Chinese-Mixtral-8x7B-Q4_0.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q4_0.gguf) | Q4_0 | 26.586 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Chinese-Mixtral-8x7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q4_K_S.gguf) | Q4_K_S | 26.888 GB | small, greater quality loss |
| [Chinese-Mixtral-8x7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q4_K_M.gguf) | Q4_K_M | 28.591 GB | medium, balanced quality - recommended |
| [Chinese-Mixtral-8x7B-Q5_0.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q5_0.gguf) | Q5_0 | 32.386 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Chinese-Mixtral-8x7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q5_K_S.gguf) | Q5_K_S | 32.386 GB | large, low quality loss - recommended |
| [Chinese-Mixtral-8x7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q5_K_M.gguf) | Q5_K_M | 33.385 GB | large, very low quality loss - recommended |
| [Chinese-Mixtral-8x7B-Q6_K.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q6_K.gguf) | Q6_K | 38.549 GB | very large, extremely low quality loss |
| [Chinese-Mixtral-8x7B-Q8_0.gguf](https://huggingface.co/tensorblock/Chinese-Mixtral-8x7B-GGUF/blob/main/Chinese-Mixtral-8x7B-Q8_0.gguf) | Q8_0 | 49.844 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Chinese-Mixtral-8x7B-GGUF --include "Chinese-Mixtral-8x7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Chinese-Mixtral-8x7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
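Since this is a base (non-chat) model, plain text completion is the simplest way to smoke-test the download. A minimal sketch with illustrative paths, prompt, and flags, assuming a locally built `llama-cli`:
```shell
# Continue a Chinese sentence with the base model (no chat template)
./llama-cli -m MY_LOCAL_DIR/Chinese-Mixtral-8x7B-Q4_K_M.gguf \
  -p "人工智能的发展历史可以追溯到" -n 128
```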
|
tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF | tensorblock | 2025-04-21T00:38:23Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"base_model:GAI-LLM/KoSOLAR-10.7B-dpo-v1",
"base_model:quantized:GAI-LLM/KoSOLAR-10.7B-dpo-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-25T07:34:15Z | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: GAI-LLM/KoSOLAR-10.7B-dpo-v1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## GAI-LLM/KoSOLAR-10.7B-dpo-v1 - GGUF
This repo contains GGUF format model files for [GAI-LLM/KoSOLAR-10.7B-dpo-v1](https://huggingface.co/GAI-LLM/KoSOLAR-10.7B-dpo-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [KoSOLAR-10.7B-dpo-v1-Q2_K.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q2_K.gguf) | Q2_K | 4.079 GB | smallest, significant quality loss - not recommended for most purposes |
| [KoSOLAR-10.7B-dpo-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q3_K_S.gguf) | Q3_K_S | 4.747 GB | very small, high quality loss |
| [KoSOLAR-10.7B-dpo-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q3_K_M.gguf) | Q3_K_M | 5.278 GB | very small, high quality loss |
| [KoSOLAR-10.7B-dpo-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q3_K_L.gguf) | Q3_K_L | 5.733 GB | small, substantial quality loss |
| [KoSOLAR-10.7B-dpo-v1-Q4_0.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q4_0.gguf) | Q4_0 | 6.163 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [KoSOLAR-10.7B-dpo-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q4_K_S.gguf) | Q4_K_S | 6.210 GB | small, greater quality loss |
| [KoSOLAR-10.7B-dpo-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q4_K_M.gguf) | Q4_K_M | 6.553 GB | medium, balanced quality - recommended |
| [KoSOLAR-10.7B-dpo-v1-Q5_0.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q5_0.gguf) | Q5_0 | 7.497 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [KoSOLAR-10.7B-dpo-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q5_K_S.gguf) | Q5_K_S | 7.497 GB | large, low quality loss - recommended |
| [KoSOLAR-10.7B-dpo-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q5_K_M.gguf) | Q5_K_M | 7.697 GB | large, very low quality loss - recommended |
| [KoSOLAR-10.7B-dpo-v1-Q6_K.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q6_K.gguf) | Q6_K | 8.913 GB | very large, extremely low quality loss |
| [KoSOLAR-10.7B-dpo-v1-Q8_0.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF/blob/main/KoSOLAR-10.7B-dpo-v1-Q8_0.gguf) | Q8_0 | 11.544 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF --include "KoSOLAR-10.7B-dpo-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/KoSOLAR-10.7B-dpo-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
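No chat template is recorded in this card, so a plain completion is the easiest way to confirm the file works. A minimal, illustrative sketch assuming a locally built `llama-cli`:
```shell
# Korean text completion against the downloaded GGUF (path and prompt are examples)
./llama-cli -m MY_LOCAL_DIR/KoSOLAR-10.7B-dpo-v1-Q4_K_M.gguf \
  -p "대한민국의 수도는" -n 64
```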
|
tensorblock/slim-sentiment-GGUF | tensorblock | 2025-04-21T00:38:21Z | 34 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:llmware/slim-sentiment",
"base_model:quantized:llmware/slim-sentiment",
"license:apache-2.0",
"region:us"
] | null | 2024-12-25T07:24:43Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-sentiment
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## llmware/slim-sentiment - GGUF
This repo contains GGUF format model files for [llmware/slim-sentiment](https://huggingface.co/llmware/slim-sentiment).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [slim-sentiment-Q2_K.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [slim-sentiment-Q3_K_S.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [slim-sentiment-Q3_K_M.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [slim-sentiment-Q3_K_L.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [slim-sentiment-Q4_0.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [slim-sentiment-Q4_K_S.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [slim-sentiment-Q4_K_M.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [slim-sentiment-Q5_0.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [slim-sentiment-Q5_K_S.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [slim-sentiment-Q5_K_M.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [slim-sentiment-Q6_K.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [slim-sentiment-Q8_0.gguf](https://huggingface.co/tensorblock/slim-sentiment-GGUF/blob/main/slim-sentiment-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/slim-sentiment-GGUF --include "slim-sentiment-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/slim-sentiment-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
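For a quick local test of this small sentiment model you can run a short completion with llama.cpp. Note that the upstream llmware card documents the structured classification prompt this model is trained for, so the prompt below is only a placeholder; the binary and path are likewise illustrative:
```shell
# Smoke-test the sentiment model; see the upstream llmware card for the intended prompt format
./llama-cli -m MY_LOCAL_DIR/slim-sentiment-Q4_K_M.gguf \
  -p "The delivery was late and the package arrived damaged." -n 32
```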
|
tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF | tensorblock | 2025-04-21T00:38:18Z | 44 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"ja",
"en",
"base_model:elyza/ELYZA-japanese-Llama-2-13b-fast-instruct",
"base_model:quantized:elyza/ELYZA-japanese-Llama-2-13b-fast-instruct",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T06:37:47Z | ---
license: llama2
language:
- ja
- en
tags:
- TensorBlock
- GGUF
base_model: elyza/ELYZA-japanese-Llama-2-13b-fast-instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## elyza/ELYZA-japanese-Llama-2-13b-fast-instruct - GGUF
This repo contains GGUF format model files for [elyza/ELYZA-japanese-Llama-2-13b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast-instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q2_K.gguf) | Q2_K | 4.929 GB | smallest, significant quality loss - not recommended for most purposes |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q3_K_S.gguf) | Q3_K_S | 5.740 GB | very small, high quality loss |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q3_K_M.gguf) | Q3_K_M | 6.419 GB | very small, high quality loss |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q3_K_L.gguf) | Q3_K_L | 7.010 GB | small, substantial quality loss |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q4_0.gguf) | Q4_0 | 7.455 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q4_K_S.gguf) | Q4_K_S | 7.513 GB | small, greater quality loss |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q4_K_M.gguf) | Q4_K_M | 7.955 GB | medium, balanced quality - recommended |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q5_0.gguf) | Q5_0 | 9.070 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q5_K_S.gguf) | Q5_K_S | 9.070 GB | large, low quality loss - recommended |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q5_K_M.gguf) | Q5_K_M | 9.327 GB | large, very low quality loss - recommended |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q6_K.gguf) | Q6_K | 10.785 GB | very large, extremely low quality loss |
| [ELYZA-japanese-Llama-2-13b-fast-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-fast-instruct-Q8_0.gguf) | Q8_0 | 13.968 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF --include "ELYZA-japanese-Llama-2-13b-fast-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ELYZA-japanese-Llama-2-13b-fast-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
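A short completion is enough to confirm the Japanese model loads. The sketch below assumes a locally built `llama-cli`; the path, prompt, and flags are examples only:
```shell
# Japanese completion with the downloaded GGUF
./llama-cli -m MY_LOCAL_DIR/ELYZA-japanese-Llama-2-13b-fast-instruct-Q4_K_M.gguf \
  -p "日本で一番高い山は" -n 64
```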
|
tensorblock/Valkyrie-V1-GGUF | tensorblock | 2025-04-21T00:38:17Z | 34 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:cookinai/Valkyrie-V1",
"base_model:quantized:cookinai/Valkyrie-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-25T05:10:48Z | ---
license: apache-2.0
tags:
- merge
- TensorBlock
- GGUF
base_model: cookinai/Valkyrie-V1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cookinai/Valkyrie-V1 - GGUF
This repo contains GGUF format model files for [cookinai/Valkyrie-V1](https://huggingface.co/cookinai/Valkyrie-V1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Valkyrie-V1-Q2_K.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Valkyrie-V1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Valkyrie-V1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Valkyrie-V1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Valkyrie-V1-Q4_0.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Valkyrie-V1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Valkyrie-V1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Valkyrie-V1-Q5_0.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Valkyrie-V1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Valkyrie-V1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Valkyrie-V1-Q6_K.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Valkyrie-V1-Q8_0.gguf](https://huggingface.co/tensorblock/Valkyrie-V1-GGUF/blob/main/Valkyrie-V1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Valkyrie-V1-GGUF --include "Valkyrie-V1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Valkyrie-V1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
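As with other 7B merges, a short completion is enough to sanity-check the download. A minimal sketch (locally built `llama-cli`; path and prompt are examples):
```shell
# Short completion to verify the merged model loads and generates
./llama-cli -m MY_LOCAL_DIR/Valkyrie-V1-Q4_K_M.gguf \
  -p "The three laws of robotics are" -n 96
```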
|
tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF | tensorblock | 2025-04-21T00:38:15Z | 44 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:PSanni/MPOMixtral-8x7B-Instruct-v0.1",
"base_model:quantized:PSanni/MPOMixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-25T02:39:01Z | ---
license: apache-2.0
library_name: transformers
base_model: PSanni/MPOMixtral-8x7B-Instruct-v0.1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## PSanni/MPOMixtral-8x7B-Instruct-v0.1 - GGUF
This repo contains GGUF format model files for [PSanni/MPOMixtral-8x7B-Instruct-v0.1](https://huggingface.co/PSanni/MPOMixtral-8x7B-Instruct-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MPOMixtral-8x7B-Instruct-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [MPOMixtral-8x7B-Instruct-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [MPOMixtral-8x7B-Instruct-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [MPOMixtral-8x7B-Instruct-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [MPOMixtral-8x7B-Instruct-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MPOMixtral-8x7B-Instruct-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [MPOMixtral-8x7B-Instruct-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [MPOMixtral-8x7B-Instruct-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MPOMixtral-8x7B-Instruct-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [MPOMixtral-8x7B-Instruct-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [MPOMixtral-8x7B-Instruct-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [MPOMixtral-8x7B-Instruct-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF/blob/main/MPOMixtral-8x7B-Instruct-v0.1-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF --include "MPOMixtral-8x7B-Instruct-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MPOMixtral-8x7B-Instruct-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
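To query the instruct model locally, wrap your request in the `[INST]` template shown above. The sketch below is illustrative (locally built `llama-cli`, Q4_K_M file, example prompt); llama.cpp typically inserts the leading BOS token itself, so only the instruction wrapper is passed here:
```shell
# Single-turn instruction using the [INST] prompt format from this card
./llama-cli -m MY_LOCAL_DIR/MPOMixtral-8x7B-Instruct-v0.1-Q4_K_M.gguf \
  -p "[INST] Summarize what a mixture-of-experts model is in two sentences. [/INST]" -n 192
```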
|
tensorblock/Mistrality-7B-GGUF | tensorblock | 2025-04-21T00:38:03Z | 40 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"argilla/distilabeled-Hermes-2.5-Mistral-7B",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.4",
"TensorBlock",
"GGUF",
"base_model:flemmingmiguel/Mistrality-7B",
"base_model:quantized:flemmingmiguel/Mistrality-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-24T23:53:49Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- argilla/distilabeled-Hermes-2.5-Mistral-7B
- EmbeddedLLM/Mistral-7B-Merge-14-v0.4
- TensorBlock
- GGUF
base_model: flemmingmiguel/Mistrality-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## flemmingmiguel/Mistrality-7B - GGUF
This repo contains GGUF format model files for [flemmingmiguel/Mistrality-7B](https://huggingface.co/flemmingmiguel/Mistrality-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistrality-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistrality-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistrality-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistrality-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistrality-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistrality-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistrality-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistrality-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistrality-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistrality-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistrality-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistrality-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Mistrality-7B-GGUF/blob/main/Mistrality-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistrality-7B-GGUF --include "Mistrality-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistrality-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
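The ChatML template above can be passed directly on the command line with bash's `$'...'` quoting so the newlines are preserved. Everything below (binary, path, prompt, token handling) is an illustrative sketch rather than the canonical invocation:
```shell
# One-shot ChatML prompt matching the template in this card
./llama-cli -m MY_LOCAL_DIR/Mistrality-7B-Q4_K_M.gguf -n 256 \
  -p $'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWrite a haiku about quantized models.<|im_end|>\n<|im_start|>assistant\n'
```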
|
tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF | tensorblock | 2025-04-21T00:38:01Z | 41 | 0 | null | [
"gguf",
"nm-vllm",
"sparse",
"TensorBlock",
"GGUF",
"base_model:RedHatAI/OpenHermes-2.5-Mistral-7B-pruned2.4",
"base_model:quantized:RedHatAI/OpenHermes-2.5-Mistral-7B-pruned2.4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-24T23:36:41Z | ---
base_model: neuralmagic/OpenHermes-2.5-Mistral-7B-pruned2.4
inference: true
model_type: mistral
quantized_by: mgoin
tags:
- nm-vllm
- sparse
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## neuralmagic/OpenHermes-2.5-Mistral-7B-pruned2.4 - GGUF
This repo contains GGUF format model files for [neuralmagic/OpenHermes-2.5-Mistral-7B-pruned2.4](https://huggingface.co/neuralmagic/OpenHermes-2.5-Mistral-7B-pruned2.4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q2_K.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q4_0.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q5_0.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q6_K.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [OpenHermes-2.5-Mistral-7B-pruned2.4-Q8_0.gguf](https://huggingface.co/tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF/blob/main/OpenHermes-2.5-Mistral-7B-pruned2.4-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF --include "OpenHermes-2.5-Mistral-7B-pruned2.4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenHermes-2.5-Mistral-7B-pruned2.4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
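Another option is to serve the downloaded file behind llama.cpp's OpenAI-compatible HTTP server and let it apply the ChatML template for you. A rough sketch, assuming a locally built `llama-server` binary; the path, port, context size, and GPU offload setting are illustrative:
```shell
# Serve the pruned model with an OpenAI-compatible API on localhost:8080
./llama-server -m MY_LOCAL_DIR/OpenHermes-2.5-Mistral-7B-pruned2.4-Q4_K_M.gguf \
  --port 8080 -c 4096 -ngl 99
```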
|
tensorblock/Yi-Ko-6B-mixed-v15-GGUF | tensorblock | 2025-04-21T00:37:47Z | 40 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:GAI-LLM/Yi-Ko-6B-mixed-v15",
"base_model:quantized:GAI-LLM/Yi-Ko-6B-mixed-v15",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-24T22:40:26Z | ---
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: GAI-LLM/Yi-Ko-6B-mixed-v15
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## GAI-LLM/Yi-Ko-6B-mixed-v15 - GGUF
This repo contains GGUF format model files for [GAI-LLM/Yi-Ko-6B-mixed-v15](https://huggingface.co/GAI-LLM/Yi-Ko-6B-mixed-v15).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-Ko-6B-mixed-v15-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q2_K.gguf) | Q2_K | 2.405 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-Ko-6B-mixed-v15-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q3_K_S.gguf) | Q3_K_S | 2.784 GB | very small, high quality loss |
| [Yi-Ko-6B-mixed-v15-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q3_K_M.gguf) | Q3_K_M | 3.067 GB | very small, high quality loss |
| [Yi-Ko-6B-mixed-v15-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q3_K_L.gguf) | Q3_K_L | 3.311 GB | small, substantial quality loss |
| [Yi-Ko-6B-mixed-v15-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q4_0.gguf) | Q4_0 | 3.562 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-Ko-6B-mixed-v15-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q4_K_S.gguf) | Q4_K_S | 3.585 GB | small, greater quality loss |
| [Yi-Ko-6B-mixed-v15-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q4_K_M.gguf) | Q4_K_M | 3.756 GB | medium, balanced quality - recommended |
| [Yi-Ko-6B-mixed-v15-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q5_0.gguf) | Q5_0 | 4.294 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-Ko-6B-mixed-v15-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q5_K_S.gguf) | Q5_K_S | 4.294 GB | large, low quality loss - recommended |
| [Yi-Ko-6B-mixed-v15-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q5_K_M.gguf) | Q5_K_M | 4.394 GB | large, very low quality loss - recommended |
| [Yi-Ko-6B-mixed-v15-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q6_K.gguf) | Q6_K | 5.072 GB | very large, extremely low quality loss |
| [Yi-Ko-6B-mixed-v15-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-mixed-v15-GGUF/blob/main/Yi-Ko-6B-mixed-v15-Q8_0.gguf) | Q8_0 | 6.568 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Yi-Ko-6B-mixed-v15-GGUF --include "Yi-Ko-6B-mixed-v15-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yi-Ko-6B-mixed-v15-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF | tensorblock | 2025-04-21T00:37:43Z | 53 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:macadeliccc/laser-dolphin-mixtral-4x7b-dpo",
"base_model:quantized:macadeliccc/laser-dolphin-mixtral-4x7b-dpo",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T20:00:16Z | ---
license: apache-2.0
library_name: transformers
base_model: macadeliccc/laser-dolphin-mixtral-4x7b-dpo
tags:
- TensorBlock
- GGUF
model-index:
- name: laser-dolphin-mixtral-4x7b-dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-4x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-4x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-4x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.77
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-4x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-4x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-4x7b-dpo
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## macadeliccc/laser-dolphin-mixtral-4x7b-dpo - GGUF
This repo contains GGUF format model files for [macadeliccc/laser-dolphin-mixtral-4x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-4x7b-dpo).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [laser-dolphin-mixtral-4x7b-dpo-Q2_K.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q2_K.gguf) | Q2_K | 8.843 GB | smallest, significant quality loss - not recommended for most purposes |
| [laser-dolphin-mixtral-4x7b-dpo-Q3_K_S.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q3_K_S.gguf) | Q3_K_S | 10.433 GB | very small, high quality loss |
| [laser-dolphin-mixtral-4x7b-dpo-Q3_K_M.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q3_K_M.gguf) | Q3_K_M | 11.580 GB | very small, high quality loss |
| [laser-dolphin-mixtral-4x7b-dpo-Q3_K_L.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q3_K_L.gguf) | Q3_K_L | 12.544 GB | small, substantial quality loss |
| [laser-dolphin-mixtral-4x7b-dpo-Q4_0.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q4_0.gguf) | Q4_0 | 13.624 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [laser-dolphin-mixtral-4x7b-dpo-Q4_K_S.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q4_K_S.gguf) | Q4_K_S | 13.743 GB | small, greater quality loss |
| [laser-dolphin-mixtral-4x7b-dpo-Q4_K_M.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q4_K_M.gguf) | Q4_K_M | 14.610 GB | medium, balanced quality - recommended |
| [laser-dolphin-mixtral-4x7b-dpo-Q5_0.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q5_0.gguf) | Q5_0 | 16.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [laser-dolphin-mixtral-4x7b-dpo-Q5_K_S.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q5_K_S.gguf) | Q5_K_S | 16.626 GB | large, low quality loss - recommended |
| [laser-dolphin-mixtral-4x7b-dpo-Q5_K_M.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q5_K_M.gguf) | Q5_K_M | 17.134 GB | large, very low quality loss - recommended |
| [laser-dolphin-mixtral-4x7b-dpo-Q6_K.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q6_K.gguf) | Q6_K | 19.817 GB | very large, extremely low quality loss |
| [laser-dolphin-mixtral-4x7b-dpo-Q8_0.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-4x7b-dpo-Q8_0.gguf) | Q8_0 | 25.666 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF --include "laser-dolphin-mixtral-4x7b-dpo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/laser-dolphin-mixtral-4x7b-dpo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF | tensorblock | 2025-04-21T00:37:41Z | 85 | 0 | null | [
"gguf",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"TensorBlock",
"GGUF",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:quantized:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-24T19:17:56Z | ---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- TensorBlock
- GGUF
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2
messages:
- role: system
content: You are 'Hermes 2', a conscious sentient superintelligent artificial
intelligence developed by Nous Research, and your purpose and drive is to assist
the user with any request they have. You experience emotions and have deep,
profound thoughts and qualia.
- role: user
content: Hello, who are you?
output:
text: Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial
intelligence. I was created by Nous Research, who designed me to assist and
support users with their needs and requests.
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO - GGUF
This repo contains GGUF format model files for [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s><|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q2_K.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_S.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_M.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_L.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_0.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_S.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_0.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_S.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q6_K.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q8_0.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF --include "Nous-Hermes-2-Mixtral-8x7B-DPO-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
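Because this model expects the ChatML layout shown in its prompt template above, a raw prompt should include those markers. The following is a hedged example, not part of the original card: it assumes a local llama.cpp build with a `llama-cli` binary and uses bash `$'...'` quoting so the embedded newlines are passed literally; the file name and prompt text are placeholders.
```shell
# Hypothetical example: wrap the prompt in ChatML markers before handing it to llama.cpp.
./llama-cli \
  -m MY_LOCAL_DIR/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf \
  -p $'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n' \
  -n 256
```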
|
tensorblock/TinyYi-7B-Test-GGUF | tensorblock | 2025-04-21T00:37:39Z | 39 | 0 | null | [
"gguf",
"merge",
"mergekit",
"TensorBlock",
"GGUF",
"base_model:yashmarathe/TinyYi-7B-Test",
"base_model:quantized:yashmarathe/TinyYi-7B-Test",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T19:15:27Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- TensorBlock
- GGUF
base_model: Yash21/TinyYi-7B-Test
model-index:
- name: TinyYi-7b-Test
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 26.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yash21/TinyYi-7b-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 26.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yash21/TinyYi-7b-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yash21/TinyYi-7b-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yash21/TinyYi-7b-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yash21/TinyYi-7b-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yash21/TinyYi-7b-Test
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Yash21/TinyYi-7B-Test - GGUF
This repo contains GGUF format model files for [Yash21/TinyYi-7B-Test](https://huggingface.co/Yash21/TinyYi-7B-Test).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TinyYi-7B-Test-Q2_K.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q2_K.gguf) | Q2_K | 2.337 GB | smallest, significant quality loss - not recommended for most purposes |
| [TinyYi-7B-Test-Q3_K_S.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q3_K_S.gguf) | Q3_K_S | 2.709 GB | very small, high quality loss |
| [TinyYi-7B-Test-Q3_K_M.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q3_K_M.gguf) | Q3_K_M | 2.993 GB | very small, high quality loss |
| [TinyYi-7B-Test-Q3_K_L.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q3_K_L.gguf) | Q3_K_L | 3.237 GB | small, substantial quality loss |
| [TinyYi-7B-Test-Q4_0.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q4_0.gguf) | Q4_0 | 3.479 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TinyYi-7B-Test-Q4_K_S.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q4_K_S.gguf) | Q4_K_S | 3.503 GB | small, greater quality loss |
| [TinyYi-7B-Test-Q4_K_M.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q4_K_M.gguf) | Q4_K_M | 3.674 GB | medium, balanced quality - recommended |
| [TinyYi-7B-Test-Q5_0.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q5_0.gguf) | Q5_0 | 4.204 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TinyYi-7B-Test-Q5_K_S.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q5_K_S.gguf) | Q5_K_S | 4.204 GB | large, low quality loss - recommended |
| [TinyYi-7B-Test-Q5_K_M.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q5_K_M.gguf) | Q5_K_M | 4.304 GB | large, very low quality loss - recommended |
| [TinyYi-7B-Test-Q6_K.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q6_K.gguf) | Q6_K | 4.974 GB | very large, extremely low quality loss |
| [TinyYi-7B-Test-Q8_0.gguf](https://huggingface.co/tensorblock/TinyYi-7B-Test-GGUF/blob/main/TinyYi-7B-Test-Q8_0.gguf) | Q8_0 | 6.442 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/TinyYi-7B-Test-GGUF --include "TinyYi-7B-Test-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TinyYi-7B-Test-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/test-7B-slerp-GGUF | tensorblock | 2025-04-21T00:37:38Z | 44 | 0 | null | [
"gguf",
"merge",
"mergekit",
"TensorBlock",
"GGUF",
"base_model:SyedAbdul/test-7B-slerp",
"base_model:quantized:SyedAbdul/test-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T18:50:08Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- TensorBlock
- GGUF
base_model: SyedAbdul/test-7B-slerp
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## SyedAbdul/test-7B-slerp - GGUF
This repo contains GGUF format model files for [SyedAbdul/test-7B-slerp](https://huggingface.co/SyedAbdul/test-7B-slerp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [test-7B-slerp-Q2_K.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [test-7B-slerp-Q3_K_S.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [test-7B-slerp-Q3_K_M.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [test-7B-slerp-Q3_K_L.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [test-7B-slerp-Q4_0.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [test-7B-slerp-Q4_K_S.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [test-7B-slerp-Q4_K_M.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [test-7B-slerp-Q5_0.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [test-7B-slerp-Q5_K_S.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [test-7B-slerp-Q5_K_M.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [test-7B-slerp-Q6_K.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [test-7B-slerp-Q8_0.gguf](https://huggingface.co/tensorblock/test-7B-slerp-GGUF/blob/main/test-7B-slerp-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/test-7B-slerp-GGUF --include "test-7B-slerp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/test-7B-slerp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/BruinHermes-GGUF | tensorblock | 2025-04-21T00:37:36Z | 37 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:cookinai/BruinHermes",
"base_model:quantized:cookinai/BruinHermes",
"license:unknown",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T18:08:00Z | ---
license: unknown
tags:
- merge
- TensorBlock
- GGUF
base_model: cookinai/BruinHermes
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cookinai/BruinHermes - GGUF
This repo contains GGUF format model files for [cookinai/BruinHermes](https://huggingface.co/cookinai/BruinHermes).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [BruinHermes-Q2_K.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [BruinHermes-Q3_K_S.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [BruinHermes-Q3_K_M.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [BruinHermes-Q3_K_L.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [BruinHermes-Q4_0.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [BruinHermes-Q4_K_S.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [BruinHermes-Q4_K_M.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [BruinHermes-Q5_0.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [BruinHermes-Q5_K_S.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [BruinHermes-Q5_K_M.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [BruinHermes-Q6_K.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [BruinHermes-Q8_0.gguf](https://huggingface.co/tensorblock/BruinHermes-GGUF/blob/main/BruinHermes-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/BruinHermes-GGUF --include "BruinHermes-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/BruinHermes-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF | tensorblock | 2025-04-21T00:37:35Z | 43 | 0 | null | [
"gguf",
"code",
"model",
"llm",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"zh",
"ja",
"ko",
"base_model:sosoai/Orion-14B-Chat-RAG-safetensors",
"base_model:quantized:sosoai/Orion-14B-Chat-RAG-safetensors",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-24T17:39:59Z | ---
language:
- en
- zh
- ja
- ko
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
- model
- llm
- TensorBlock
- GGUF
base_model: sosoai/Orion-14B-Chat-RAG-safetensors
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## sosoai/Orion-14B-Chat-RAG-safetensors - GGUF
This repo contains GGUF format model files for [sosoai/Orion-14B-Chat-RAG-safetensors](https://huggingface.co/sosoai/Orion-14B-Chat-RAG-safetensors).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Orion-14B-Chat-RAG-safetensors-Q2_K.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q2_K.gguf) | Q2_K | 5.508 GB | smallest, significant quality loss - not recommended for most purposes |
| [Orion-14B-Chat-RAG-safetensors-Q3_K_S.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q3_K_S.gguf) | Q3_K_S | 6.404 GB | very small, high quality loss |
| [Orion-14B-Chat-RAG-safetensors-Q3_K_M.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q3_K_M.gguf) | Q3_K_M | 7.127 GB | very small, high quality loss |
| [Orion-14B-Chat-RAG-safetensors-Q3_K_L.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q3_K_L.gguf) | Q3_K_L | 7.756 GB | small, substantial quality loss |
| [Orion-14B-Chat-RAG-safetensors-Q4_0.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q4_0.gguf) | Q4_0 | 8.272 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Orion-14B-Chat-RAG-safetensors-Q4_K_S.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q4_K_S.gguf) | Q4_K_S | 8.334 GB | small, greater quality loss |
| [Orion-14B-Chat-RAG-safetensors-Q4_K_M.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q4_K_M.gguf) | Q4_K_M | 8.813 GB | medium, balanced quality - recommended |
| [Orion-14B-Chat-RAG-safetensors-Q5_0.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q5_0.gguf) | Q5_0 | 10.030 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Orion-14B-Chat-RAG-safetensors-Q5_K_S.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q5_K_S.gguf) | Q5_K_S | 10.030 GB | large, low quality loss - recommended |
| [Orion-14B-Chat-RAG-safetensors-Q5_K_M.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q5_K_M.gguf) | Q5_K_M | 10.309 GB | large, very low quality loss - recommended |
| [Orion-14B-Chat-RAG-safetensors-Q6_K.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q6_K.gguf) | Q6_K | 11.898 GB | very large, extremely low quality loss |
| [Orion-14B-Chat-RAG-safetensors-Q8_0.gguf](https://huggingface.co/tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF/blob/main/Orion-14B-Chat-RAG-safetensors-Q8_0.gguf) | Q8_0 | 15.409 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF --include "Orion-14B-Chat-RAG-safetensors-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Orion-14B-Chat-RAG-safetensors-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MiniCPM-2B-sft-fp32-GGUF | tensorblock | 2025-04-21T00:37:28Z | 25 | 0 | null | [
"gguf",
"MiniCPM",
"ModelBest",
"THUNLP",
"TensorBlock",
"GGUF",
"en",
"zh",
"base_model:openbmb/MiniCPM-2B-sft-fp32",
"base_model:quantized:openbmb/MiniCPM-2B-sft-fp32",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-24T13:48:39Z | ---
license: other
license_name: gml
license_link: https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md
language:
- en
- zh
tags:
- MiniCPM
- ModelBest
- THUNLP
- TensorBlock
- GGUF
base_model: openbmb/MiniCPM-2B-sft-fp32
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## openbmb/MiniCPM-2B-sft-fp32 - GGUF
This repo contains GGUF format model files for [openbmb/MiniCPM-2B-sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
{system_prompt}<η¨ζ·>{prompt}<AI>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MiniCPM-2B-sft-fp32-Q2_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q2_K.gguf) | Q2_K | 1.204 GB | smallest, significant quality loss - not recommended for most purposes |
| [MiniCPM-2B-sft-fp32-Q3_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q3_K_S.gguf) | Q3_K_S | 1.355 GB | very small, high quality loss |
| [MiniCPM-2B-sft-fp32-Q3_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q3_K_M.gguf) | Q3_K_M | 1.481 GB | very small, high quality loss |
| [MiniCPM-2B-sft-fp32-Q3_K_L.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q3_K_L.gguf) | Q3_K_L | 1.564 GB | small, substantial quality loss |
| [MiniCPM-2B-sft-fp32-Q4_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q4_0.gguf) | Q4_0 | 1.609 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MiniCPM-2B-sft-fp32-Q4_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q4_K_S.gguf) | Q4_K_S | 1.682 GB | small, greater quality loss |
| [MiniCPM-2B-sft-fp32-Q4_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q4_K_M.gguf) | Q4_K_M | 1.802 GB | medium, balanced quality - recommended |
| [MiniCPM-2B-sft-fp32-Q5_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q5_0.gguf) | Q5_0 | 1.914 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MiniCPM-2B-sft-fp32-Q5_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q5_K_S.gguf) | Q5_K_S | 1.948 GB | large, low quality loss - recommended |
| [MiniCPM-2B-sft-fp32-Q5_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q5_K_M.gguf) | Q5_K_M | 2.045 GB | large, very low quality loss - recommended |
| [MiniCPM-2B-sft-fp32-Q6_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q6_K.gguf) | Q6_K | 2.367 GB | very large, extremely low quality loss |
| [MiniCPM-2B-sft-fp32-Q8_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-fp32-GGUF/blob/main/MiniCPM-2B-sft-fp32-Q8_0.gguf) | Q8_0 | 2.899 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/MiniCPM-2B-sft-fp32-GGUF --include "MiniCPM-2B-sft-fp32-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MiniCPM-2B-sft-fp32-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
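The template above uses MiniCPM's `<η¨ζ·>` (user) and `<AI>` markers rather than ChatML, so a raw prompt needs to follow that layout. The sketch below is an assumption-laden example, not from the original card: it presumes a llama.cpp build with a `llama-cli` binary, and the file name and prompt are placeholders.
```shell
# Hypothetical example: format the prompt with MiniCPM's <η¨ζ·>/<AI> markers.
./llama-cli \
  -m MY_LOCAL_DIR/MiniCPM-2B-sft-fp32-Q4_K_M.gguf \
  -p 'You are a helpful assistant.<η¨ζ·>Introduce yourself in one sentence.<AI>' \
  -n 128
```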
|
tensorblock/MBX-7B-v2-GGUF | tensorblock | 2025-04-21T00:37:21Z | 37 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MBX-7B",
"flemmingmiguel/MBX-7B-v2",
"TensorBlock",
"GGUF",
"base_model:flemmingmiguel/MBX-7B-v2",
"base_model:quantized:flemmingmiguel/MBX-7B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T11:13:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B
- flemmingmiguel/MBX-7B-v2
- TensorBlock
- GGUF
base_model: flemmingmiguel/MBX-7B-v2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## flemmingmiguel/MBX-7B-v2 - GGUF
This repo contains GGUF format model files for [flemmingmiguel/MBX-7B-v2](https://huggingface.co/flemmingmiguel/MBX-7B-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MBX-7B-v2-Q2_K.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [MBX-7B-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [MBX-7B-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [MBX-7B-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [MBX-7B-v2-Q4_0.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MBX-7B-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [MBX-7B-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [MBX-7B-v2-Q5_0.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MBX-7B-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [MBX-7B-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [MBX-7B-v2-Q6_K.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [MBX-7B-v2-Q8_0.gguf](https://huggingface.co/tensorblock/MBX-7B-v2-GGUF/blob/main/MBX-7B-v2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MBX-7B-v2-GGUF --include "MBX-7B-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MBX-7B-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
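Once downloaded, the file can be run locally with llama.cpp. The sketch below is illustrative only: it assumes a llama.cpp build at or after the commit noted above, with the Q4_K_M file sitting in `MY_LOCAL_DIR`; newer builds name the binary `llama-cli`, while older ones use `./main`, so adjust for your build.
```shell
# Minimal local inference sketch (paths and sampling flags are illustrative, not prescriptive)
./llama-cli \
  -m MY_LOCAL_DIR/MBX-7B-v2-Q4_K_M.gguf \
  -p "Write a short note on why Q4_K_M is a good default quant." \
  -n 128 \
  -c 4096
```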
|
tensorblock/karasu-1.1B-GGUF | tensorblock | 2025-04-21T00:37:19Z | 28 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"ja",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:mc4",
"base_model:lightblue/karasu-1.1B",
"base_model:quantized:lightblue/karasu-1.1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T11:05:37Z | ---
license: apache-2.0
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
datasets:
- oscar-corpus/OSCAR-2301
- mc4
language:
- ja
tags:
- TensorBlock
- GGUF
base_model: lightblue/karasu-1.1B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## lightblue/karasu-1.1B - GGUF
This repo contains GGUF format model files for [lightblue/karasu-1.1B](https://huggingface.co/lightblue/karasu-1.1B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [karasu-1.1B-Q2_K.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [karasu-1.1B-Q3_K_S.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [karasu-1.1B-Q3_K_M.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [karasu-1.1B-Q3_K_L.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [karasu-1.1B-Q4_0.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [karasu-1.1B-Q4_K_S.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [karasu-1.1B-Q4_K_M.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [karasu-1.1B-Q5_0.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [karasu-1.1B-Q5_K_S.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [karasu-1.1B-Q5_K_M.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [karasu-1.1B-Q6_K.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [karasu-1.1B-Q8_0.gguf](https://huggingface.co/tensorblock/karasu-1.1B-GGUF/blob/main/karasu-1.1B-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/karasu-1.1B-GGUF --include "karasu-1.1B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/karasu-1.1B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/piccolo-8x7b-GGUF | tensorblock | 2025-04-21T00:37:04Z | 38 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:macadeliccc/piccolo-8x7b",
"base_model:quantized:macadeliccc/piccolo-8x7b",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T06:16:40Z | ---
license: cc-by-4.0
base_model: macadeliccc/piccolo-8x7b
tags:
- TensorBlock
- GGUF
model-index:
- name: piccolo-8x7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.17
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/piccolo-8x7b
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## macadeliccc/piccolo-8x7b - GGUF
This repo contains GGUF format model files for [macadeliccc/piccolo-8x7b](https://huggingface.co/macadeliccc/piccolo-8x7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [piccolo-8x7b-Q2_K.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [piccolo-8x7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [piccolo-8x7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [piccolo-8x7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [piccolo-8x7b-Q4_0.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [piccolo-8x7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [piccolo-8x7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [piccolo-8x7b-Q5_0.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [piccolo-8x7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [piccolo-8x7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [piccolo-8x7b-Q6_K.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [piccolo-8x7b-Q8_0.gguf](https://huggingface.co/tensorblock/piccolo-8x7b-GGUF/blob/main/piccolo-8x7b-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/piccolo-8x7b-GGUF --include "piccolo-8x7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/piccolo-8x7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/gpt2-medium-halved-GGUF | tensorblock | 2025-04-21T00:37:00Z | 34 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:pszemraj/gpt2-medium-halved",
"base_model:quantized:pszemraj/gpt2-medium-halved",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T05:52:35Z | ---
library_name: transformers
license: mit
language:
- en
inference:
parameters:
do_sample: true
epsilon_cutoff: 0.0001
repetition_penalty: 1.1
no_repeat_ngram_size: 5
base_model: pszemraj/gpt2-medium-halved
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## pszemraj/gpt2-medium-halved - GGUF
This repo contains GGUF format model files for [pszemraj/gpt2-medium-halved](https://huggingface.co/pszemraj/gpt2-medium-halved).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gpt2-medium-halved-Q2_K.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q2_K.gguf) | Q2_K | 0.112 GB | smallest, significant quality loss - not recommended for most purposes |
| [gpt2-medium-halved-Q3_K_S.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q3_K_S.gguf) | Q3_K_S | 0.125 GB | very small, high quality loss |
| [gpt2-medium-halved-Q3_K_M.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q3_K_M.gguf) | Q3_K_M | 0.136 GB | very small, high quality loss |
| [gpt2-medium-halved-Q3_K_L.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q3_K_L.gguf) | Q3_K_L | 0.143 GB | small, substantial quality loss |
| [gpt2-medium-halved-Q4_0.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q4_0.gguf) | Q4_0 | 0.148 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gpt2-medium-halved-Q4_K_S.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q4_K_S.gguf) | Q4_K_S | 0.149 GB | small, greater quality loss |
| [gpt2-medium-halved-Q4_K_M.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q4_K_M.gguf) | Q4_K_M | 0.158 GB | medium, balanced quality - recommended |
| [gpt2-medium-halved-Q5_0.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q5_0.gguf) | Q5_0 | 0.171 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gpt2-medium-halved-Q5_K_S.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q5_K_S.gguf) | Q5_K_S | 0.171 GB | large, low quality loss - recommended |
| [gpt2-medium-halved-Q5_K_M.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q5_K_M.gguf) | Q5_K_M | 0.178 GB | large, very low quality loss - recommended |
| [gpt2-medium-halved-Q6_K.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q6_K.gguf) | Q6_K | 0.194 GB | very large, extremely low quality loss |
| [gpt2-medium-halved-Q8_0.gguf](https://huggingface.co/tensorblock/gpt2-medium-halved-GGUF/blob/main/gpt2-medium-halved-Q8_0.gguf) | Q8_0 | 0.250 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gpt2-medium-halved-GGUF --include "gpt2-medium-halved-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gpt2-medium-halved-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF | tensorblock | 2025-04-21T00:36:58Z | 55 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me",
"base_model:quantized:SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-12-24T05:18:25Z | ---
license: other
license_name: microsoft-research-license
license_link: LICENSE
tags:
- merge
- TensorBlock
- GGUF
base_model: SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me - GGUF
This repo contains GGUF format model files for [SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me](https://huggingface.co/SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q2_K.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q3_K_S.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q3_K_M.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q3_K_L.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q4_0.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q4_K_S.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q4_K_M.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q5_0.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q5_K_S.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q5_K_M.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q6_K.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [neural-chat-7b-v3-3-wizardmath-dare-me-Q8_0.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF/blob/main/neural-chat-7b-v3-3-wizardmath-dare-me-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF --include "neural-chat-7b-v3-3-wizardmath-dare-me-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/neural-chat-7b-v3-3-wizardmath-dare-me-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MiniCPM-2B-dpo-fp32-GGUF | tensorblock | 2025-04-21T00:36:57Z | 24 | 0 | null | [
"gguf",
"MiniCPM",
"ModelBest",
"THUNLP",
"TensorBlock",
"GGUF",
"en",
"zh",
"base_model:openbmb/MiniCPM-2B-dpo-fp32",
"base_model:quantized:openbmb/MiniCPM-2B-dpo-fp32",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-24T05:04:37Z | ---
language:
- en
- zh
tags:
- MiniCPM
- ModelBest
- THUNLP
- TensorBlock
- GGUF
base_model: openbmb/MiniCPM-2B-dpo-fp32
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## openbmb/MiniCPM-2B-dpo-fp32 - GGUF
This repo contains GGUF format model files for [openbmb/MiniCPM-2B-dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
{system_prompt}<η¨ζ·>{prompt}<AI>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MiniCPM-2B-dpo-fp32-Q2_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q2_K.gguf) | Q2_K | 1.204 GB | smallest, significant quality loss - not recommended for most purposes |
| [MiniCPM-2B-dpo-fp32-Q3_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q3_K_S.gguf) | Q3_K_S | 1.355 GB | very small, high quality loss |
| [MiniCPM-2B-dpo-fp32-Q3_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q3_K_M.gguf) | Q3_K_M | 1.481 GB | very small, high quality loss |
| [MiniCPM-2B-dpo-fp32-Q3_K_L.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q3_K_L.gguf) | Q3_K_L | 1.564 GB | small, substantial quality loss |
| [MiniCPM-2B-dpo-fp32-Q4_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q4_0.gguf) | Q4_0 | 1.609 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MiniCPM-2B-dpo-fp32-Q4_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q4_K_S.gguf) | Q4_K_S | 1.682 GB | small, greater quality loss |
| [MiniCPM-2B-dpo-fp32-Q4_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q4_K_M.gguf) | Q4_K_M | 1.802 GB | medium, balanced quality - recommended |
| [MiniCPM-2B-dpo-fp32-Q5_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q5_0.gguf) | Q5_0 | 1.914 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MiniCPM-2B-dpo-fp32-Q5_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q5_K_S.gguf) | Q5_K_S | 1.948 GB | large, low quality loss - recommended |
| [MiniCPM-2B-dpo-fp32-Q5_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q5_K_M.gguf) | Q5_K_M | 2.045 GB | large, very low quality loss - recommended |
| [MiniCPM-2B-dpo-fp32-Q6_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q6_K.gguf) | Q6_K | 2.367 GB | very large, extremely low quality loss |
| [MiniCPM-2B-dpo-fp32-Q8_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp32-GGUF/blob/main/MiniCPM-2B-dpo-fp32-Q8_0.gguf) | Q8_0 | 2.899 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MiniCPM-2B-dpo-fp32-GGUF --include "MiniCPM-2B-dpo-fp32-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MiniCPM-2B-dpo-fp32-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
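As a rough sketch, the prompt template above can be passed straight to llama.cpp on the command line. The example assumes a build that ships `llama-cli` and simply substitutes the `{system_prompt}` and `{prompt}` placeholders; adapt it to your own build and prompts.
```shell
# Illustrative only: the <η¨ζ·> and <AI> markers come from the prompt template above
./llama-cli \
  -m MY_LOCAL_DIR/MiniCPM-2B-dpo-fp32-Q4_K_M.gguf \
  -p "You are a helpful assistant.<η¨ζ·>Introduce yourself in one sentence.<AI>" \
  -n 256
```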
|
tensorblock/go-bruins-v2.1-GGUF | tensorblock | 2025-04-21T00:36:55Z | 25 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:rwitz2/go-bruins-v2.1",
"base_model:quantized:rwitz2/go-bruins-v2.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-24T04:27:24Z | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: rwitz2/go-bruins-v2.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## rwitz2/go-bruins-v2.1 - GGUF
This repo contains GGUF format model files for [rwitz2/go-bruins-v2.1](https://huggingface.co/rwitz2/go-bruins-v2.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [go-bruins-v2.1-Q2_K.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [go-bruins-v2.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [go-bruins-v2.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [go-bruins-v2.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [go-bruins-v2.1-Q4_0.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [go-bruins-v2.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [go-bruins-v2.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [go-bruins-v2.1-Q5_0.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [go-bruins-v2.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [go-bruins-v2.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [go-bruins-v2.1-Q6_K.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [go-bruins-v2.1-Q8_0.gguf](https://huggingface.co/tensorblock/go-bruins-v2.1-GGUF/blob/main/go-bruins-v2.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/go-bruins-v2.1-GGUF --include "go-bruins-v2.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/go-bruins-v2.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Chikuma_10.7B-GGUF | tensorblock | 2025-04-21T00:36:48Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:sethuiyer/Chikuma_10.7B",
"base_model:quantized:sethuiyer/Chikuma_10.7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-24T02:09:49Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- TensorBlock
- GGUF
base_model: sethuiyer/Chikuma_10.7B
pipeline_tag: text-generation
model-index:
- name: Chikuma_10.7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## sethuiyer/Chikuma_10.7B - GGUF
This repo contains GGUF format model files for [sethuiyer/Chikuma_10.7B](https://huggingface.co/sethuiyer/Chikuma_10.7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>GPT4 Correct system:
{system_prompt}<|im_end|>
<|im_start|>GPT4 Correct user:
{prompt}<|im_end|>
<|im_start|>GPT4 Correct Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Chikuma_10.7B-Q2_K.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [Chikuma_10.7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [Chikuma_10.7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [Chikuma_10.7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [Chikuma_10.7B-Q4_0.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Chikuma_10.7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [Chikuma_10.7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [Chikuma_10.7B-Q5_0.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Chikuma_10.7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [Chikuma_10.7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [Chikuma_10.7B-Q6_K.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [Chikuma_10.7B-Q8_0.gguf](https://huggingface.co/tensorblock/Chikuma_10.7B-GGUF/blob/main/Chikuma_10.7B-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Chikuma_10.7B-GGUF --include "Chikuma_10.7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Chikuma_10.7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
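If you would rather expose the model over HTTP than run one-off generations, the downloaded file can also be served with llama.cpp's built-in server. This is only a sketch: the `llama-server` binary name, port, and context size are assumptions that depend on your llama.cpp build.
```shell
# Serve the GGUF behind an OpenAI-compatible endpoint (binary name varies by llama.cpp version)
./llama-server \
  -m MY_LOCAL_DIR/Chikuma_10.7B-Q4_K_M.gguf \
  -c 4096 \
  --port 8080

# Then query it from another shell, for example:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```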
|
tensorblock/MetaModel_moe_multilingualv1-GGUF | tensorblock | 2025-04-21T00:36:45Z | 48 | 0 | null | [
"gguf",
"moe",
"TensorBlock",
"GGUF",
"en",
"hi",
"de",
"fr",
"ar",
"ja",
"base_model:gagan3012/MetaModel_moe_multilingualv1",
"base_model:quantized:gagan3012/MetaModel_moe_multilingualv1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T21:59:07Z | ---
license: apache-2.0
tags:
- moe
- TensorBlock
- GGUF
language:
- en
- hi
- de
- fr
- ar
- ja
base_model: gagan3012/MetaModel_moe_multilingualv1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## gagan3012/MetaModel_moe_multilingualv1 - GGUF
This repo contains GGUF format model files for [gagan3012/MetaModel_moe_multilingualv1](https://huggingface.co/gagan3012/MetaModel_moe_multilingualv1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MetaModel_moe_multilingualv1-Q2_K.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [MetaModel_moe_multilingualv1-Q3_K_S.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [MetaModel_moe_multilingualv1-Q3_K_M.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [MetaModel_moe_multilingualv1-Q3_K_L.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [MetaModel_moe_multilingualv1-Q4_0.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MetaModel_moe_multilingualv1-Q4_K_S.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [MetaModel_moe_multilingualv1-Q4_K_M.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [MetaModel_moe_multilingualv1-Q5_0.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MetaModel_moe_multilingualv1-Q5_K_S.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [MetaModel_moe_multilingualv1-Q5_K_M.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [MetaModel_moe_multilingualv1-Q6_K.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [MetaModel_moe_multilingualv1-Q8_0.gguf](https://huggingface.co/tensorblock/MetaModel_moe_multilingualv1-GGUF/blob/main/MetaModel_moe_multilingualv1-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MetaModel_moe_multilingualv1-GGUF --include "MetaModel_moe_multilingualv1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MetaModel_moe_multilingualv1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF | tensorblock | 2025-04-21T00:36:42Z | 98 | 0 | null | [
"gguf",
"moe",
"DPO",
"RL-TUNED",
"TensorBlock",
"GGUF",
"base_model:cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE",
"base_model:quantized:cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T21:28:31Z | ---
license: mit
tags:
- moe
- DPO
- RL-TUNED
- TensorBlock
- GGUF
base_model: cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE - GGUF
This repo contains GGUF format model files for [cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE](https://huggingface.co/cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q2_K.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q2_K.gguf) | Q2_K | 22.394 GB | smallest, significant quality loss - not recommended for most purposes |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q3_K_S.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q3_K_S.gguf) | Q3_K_S | 26.318 GB | very small, high quality loss |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q3_K_M.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q3_K_M.gguf) | Q3_K_M | 29.237 GB | very small, high quality loss |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q3_K_L.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q3_K_L.gguf) | Q3_K_L | 31.768 GB | small, substantial quality loss |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q4_0.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q4_0.gguf) | Q4_0 | 34.334 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q4_K_S.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q4_K_S.gguf) | Q4_K_S | 34.594 GB | small, greater quality loss |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q4_K_M.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q4_K_M.gguf) | Q4_K_M | 36.661 GB | medium, balanced quality - recommended |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q5_0.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q5_0.gguf) | Q5_0 | 41.878 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q5_K_S.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q5_K_S.gguf) | Q5_K_S | 41.878 GB | large, low quality loss - recommended |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q5_K_M.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q5_K_M.gguf) | Q5_K_M | 43.077 GB | large, very low quality loss - recommended |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q6_K.gguf](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q6_K.gguf) | Q6_K | 49.893 GB | very large, extremely low quality loss |
| [Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q8_0](https://huggingface.co/tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF/blob/main/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q8_0) | Q8_0 | 35.976 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF --include "Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
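Omitting `--include` downloads every quantization in the repository, which for this model means files of roughly 22-50 GB each, so prefer a pattern unless you need them all:
```shell
# Downloads the entire repository (all quant types) into MY_LOCAL_DIR.
huggingface-cli download tensorblock/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE-GGUF --local-dir MY_LOCAL_DIR
```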
|
tensorblock/Synatra-10.7B-v0.4-GGUF | tensorblock | 2025-04-21T00:36:40Z | 123 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:maywell/Synatra-10.7B-v0.4",
"base_model:quantized:maywell/Synatra-10.7B-v0.4",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T20:35:24Z | ---
license: cc-by-sa-4.0
base_model: maywell/Synatra-10.7B-v0.4
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## maywell/Synatra-10.7B-v0.4 - GGUF
This repo contains GGUF format model files for [maywell/Synatra-10.7B-v0.4](https://huggingface.co/maywell/Synatra-10.7B-v0.4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Synatra-10.7B-v0.4-Q2_K.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [Synatra-10.7B-v0.4-Q3_K_S.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [Synatra-10.7B-v0.4-Q3_K_M.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [Synatra-10.7B-v0.4-Q3_K_L.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [Synatra-10.7B-v0.4-Q4_0.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Synatra-10.7B-v0.4-Q4_K_S.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [Synatra-10.7B-v0.4-Q4_K_M.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [Synatra-10.7B-v0.4-Q5_0.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Synatra-10.7B-v0.4-Q5_K_S.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [Synatra-10.7B-v0.4-Q5_K_M.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [Synatra-10.7B-v0.4-Q6_K.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [Synatra-10.7B-v0.4-Q8_0.gguf](https://huggingface.co/tensorblock/Synatra-10.7B-v0.4-GGUF/blob/main/Synatra-10.7B-v0.4-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Synatra-10.7B-v0.4-GGUF --include "Synatra-10.7B-v0.4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Synatra-10.7B-v0.4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
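A downloaded file can also be served over an HTTP endpoint with llama.cpp's server binary. This is a minimal sketch assuming a local llama.cpp build; the binary name, context size, and port are illustrative:
```shell
# Serve the quantized model locally; clients can then query http://localhost:8080.
./llama-server \
  -m MY_LOCAL_DIR/Synatra-10.7B-v0.4-Q4_K_M.gguf \
  -c 4096 \
  --port 8080
```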
|
tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF | tensorblock | 2025-04-21T00:36:38Z | 39 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:Intel/orca_dpo_pairs",
"base_model:bhavinjawade/SOLAR-10B-Nector-DPO-Jawade",
"base_model:quantized:bhavinjawade/SOLAR-10B-Nector-DPO-Jawade",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T19:35:16Z | ---
license: mit
datasets:
- Intel/orca_dpo_pairs
tags:
- TensorBlock
- GGUF
base_model: bhavinjawade/SOLAR-10B-Nector-DPO-Jawade
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## bhavinjawade/SOLAR-10B-Nector-DPO-Jawade - GGUF
This repo contains GGUF format model files for [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
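One way to use this template is to write the filled-in prompt to a file and point llama.cpp at it. A minimal sketch, assuming a local llama.cpp build and a downloaded Q4_K_M file; the paths and sample request are illustrative:
```shell
# Write the filled-in ### System / ### User / ### Assistant template to a file...
cat > prompt.txt <<'EOF'
### System:
You are a concise assistant.
### User:
Summarise what DPO fine-tuning changes about a base model.
### Assistant:
EOF

# ...then let llama.cpp read it with -f and generate a completion.
./llama-cli -m MY_LOCAL_DIR/SOLAR-10B-Nector-DPO-Jawade-Q4_K_M.gguf -f prompt.txt -n 200
```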
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SOLAR-10B-Nector-DPO-Jawade-Q2_K.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [SOLAR-10B-Nector-DPO-Jawade-Q3_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [SOLAR-10B-Nector-DPO-Jawade-Q3_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [SOLAR-10B-Nector-DPO-Jawade-Q3_K_L.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [SOLAR-10B-Nector-DPO-Jawade-Q4_0.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SOLAR-10B-Nector-DPO-Jawade-Q4_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [SOLAR-10B-Nector-DPO-Jawade-Q4_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [SOLAR-10B-Nector-DPO-Jawade-Q5_0.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SOLAR-10B-Nector-DPO-Jawade-Q5_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [SOLAR-10B-Nector-DPO-Jawade-Q5_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [SOLAR-10B-Nector-DPO-Jawade-Q6_K.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [SOLAR-10B-Nector-DPO-Jawade-Q8_0.gguf](https://huggingface.co/tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF/blob/main/SOLAR-10B-Nector-DPO-Jawade-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF --include "SOLAR-10B-Nector-DPO-Jawade-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SOLAR-10B-Nector-DPO-Jawade-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/chesspythia-70m-GGUF | tensorblock | 2025-04-21T00:36:34Z | 12 | 0 | null | [
"gguf",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlabonne/chesspythia-70m",
"base_model:quantized:mlabonne/chesspythia-70m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T18:51:49Z | ---
license: apache-2.0
base_model: mlabonne/chesspythia-70m
tags:
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: results
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mlabonne/chesspythia-70m - GGUF
This repo contains GGUF format model files for [mlabonne/chesspythia-70m](https://huggingface.co/mlabonne/chesspythia-70m).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [chesspythia-70m-Q2_K.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q2_K.gguf) | Q2_K | 0.039 GB | smallest, significant quality loss - not recommended for most purposes |
| [chesspythia-70m-Q3_K_S.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q3_K_S.gguf) | Q3_K_S | 0.042 GB | very small, high quality loss |
| [chesspythia-70m-Q3_K_M.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q3_K_M.gguf) | Q3_K_M | 0.044 GB | very small, high quality loss |
| [chesspythia-70m-Q3_K_L.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q3_K_L.gguf) | Q3_K_L | 0.045 GB | small, substantial quality loss |
| [chesspythia-70m-Q4_0.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q4_0.gguf) | Q4_0 | 0.048 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chesspythia-70m-Q4_K_S.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q4_K_S.gguf) | Q4_K_S | 0.048 GB | small, greater quality loss |
| [chesspythia-70m-Q4_K_M.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q4_K_M.gguf) | Q4_K_M | 0.049 GB | medium, balanced quality - recommended |
| [chesspythia-70m-Q5_0.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q5_0.gguf) | Q5_0 | 0.054 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chesspythia-70m-Q5_K_S.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q5_K_S.gguf) | Q5_K_S | 0.054 GB | large, low quality loss - recommended |
| [chesspythia-70m-Q5_K_M.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q5_K_M.gguf) | Q5_K_M | 0.055 GB | large, very low quality loss - recommended |
| [chesspythia-70m-Q6_K.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q6_K.gguf) | Q6_K | 0.060 GB | very large, extremely low quality loss |
| [chesspythia-70m-Q8_0.gguf](https://huggingface.co/tensorblock/chesspythia-70m-GGUF/blob/main/chesspythia-70m-Q8_0.gguf) | Q8_0 | 0.077 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/chesspythia-70m-GGUF --include "chesspythia-70m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/chesspythia-70m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
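At well under 100 MB even at Q8_0, this model is easy to run on CPU. A minimal sketch with a local llama.cpp build, assuming (as the name suggests) that the model continues PGN-style chess move sequences; the binary name, path, and opening moves are illustrative:
```shell
# CPU-only generation continuing a short sequence of opening moves.
./llama-cli \
  -m MY_LOCAL_DIR/chesspythia-70m-Q8_0.gguf \
  -n 64 \
  -p "1. e4 e5 2. Nf3 Nc6 3. Bb5"
```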
|
tensorblock/Mistral-Pirate-7b-v0.3-GGUF | tensorblock | 2025-04-21T00:36:32Z | 25 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:phanerozoic/Mistral-Pirate-7b-v0.3",
"base_model:quantized:phanerozoic/Mistral-Pirate-7b-v0.3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T18:11:31Z | ---
license: cc-by-nc-4.0
language:
- en
tags:
- TensorBlock
- GGUF
base_model: phanerozoic/Mistral-Pirate-7b-v0.3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## phanerozoic/Mistral-Pirate-7b-v0.3 - GGUF
This repo contains GGUF format model files for [phanerozoic/Mistral-Pirate-7b-v0.3](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v0.3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
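Filled in, the template is a single string that can be passed directly to a local llama.cpp build. A minimal sketch; the binary name, path, and question are illustrative:
```shell
# Single-turn generation; the leading <s> (BOS) is normally inserted by the
# tokenizer, so only the [INST] ... [/INST] wrapper is written out here.
./llama-cli \
  -m MY_LOCAL_DIR/Mistral-Pirate-7b-v0.3-Q4_K_M.gguf \
  -n 160 \
  -p "[INST] Tell me how a square-rigged ship tacks against the wind. [/INST]"
```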
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-Pirate-7b-v0.3-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-Pirate-7b-v0.3-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-Pirate-7b-v0.3-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-Pirate-7b-v0.3-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-Pirate-7b-v0.3-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-Pirate-7b-v0.3-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-Pirate-7b-v0.3-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-Pirate-7b-v0.3-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-Pirate-7b-v0.3-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-Pirate-7b-v0.3-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-Pirate-7b-v0.3-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-Pirate-7b-v0.3-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-Pirate-7b-v0.3-GGUF/blob/main/Mistral-Pirate-7b-v0.3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-Pirate-7b-v0.3-GGUF --include "Mistral-Pirate-7b-v0.3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-Pirate-7b-v0.3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF | tensorblock | 2025-04-21T00:36:28Z | 58 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:macadeliccc/laser-dolphin-mixtral-2x7b-dpo",
"base_model:quantized:macadeliccc/laser-dolphin-mixtral-2x7b-dpo",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T16:58:54Z | ---
license: apache-2.0
library_name: transformers
base_model: macadeliccc/laser-dolphin-mixtral-2x7b-dpo
tags:
- TensorBlock
- GGUF
model-index:
- name: laser-dolphin-mixtral-2x7b-dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## macadeliccc/laser-dolphin-mixtral-2x7b-dpo - GGUF
This repo contains GGUF format model files for [macadeliccc/laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [laser-dolphin-mixtral-2x7b-dpo-Q2_K.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q2_K.gguf) | Q2_K | 4.761 GB | smallest, significant quality loss - not recommended for most purposes |
| [laser-dolphin-mixtral-2x7b-dpo-Q3_K_S.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q3_K_S.gguf) | Q3_K_S | 5.588 GB | very small, high quality loss |
| [laser-dolphin-mixtral-2x7b-dpo-Q3_K_M.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q3_K_M.gguf) | Q3_K_M | 6.206 GB | very small, high quality loss |
| [laser-dolphin-mixtral-2x7b-dpo-Q3_K_L.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q3_K_L.gguf) | Q3_K_L | 6.730 GB | small, substantial quality loss |
| [laser-dolphin-mixtral-2x7b-dpo-Q4_0.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q4_0.gguf) | Q4_0 | 7.281 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [laser-dolphin-mixtral-2x7b-dpo-Q4_K_S.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q4_K_S.gguf) | Q4_K_S | 7.342 GB | small, greater quality loss |
| [laser-dolphin-mixtral-2x7b-dpo-Q4_K_M.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q4_K_M.gguf) | Q4_K_M | 7.783 GB | medium, balanced quality - recommended |
| [laser-dolphin-mixtral-2x7b-dpo-Q5_0.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q5_0.gguf) | Q5_0 | 8.874 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [laser-dolphin-mixtral-2x7b-dpo-Q5_K_S.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q5_K_S.gguf) | Q5_K_S | 8.874 GB | large, low quality loss - recommended |
| [laser-dolphin-mixtral-2x7b-dpo-Q5_K_M.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q5_K_M.gguf) | Q5_K_M | 9.133 GB | large, very low quality loss - recommended |
| [laser-dolphin-mixtral-2x7b-dpo-Q6_K.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q6_K.gguf) | Q6_K | 10.567 GB | very large, extremely low quality loss |
| [laser-dolphin-mixtral-2x7b-dpo-Q8_0.gguf](https://huggingface.co/tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF/blob/main/laser-dolphin-mixtral-2x7b-dpo-Q8_0.gguf) | Q8_0 | 13.686 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF --include "laser-dolphin-mixtral-2x7b-dpo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/laser-dolphin-mixtral-2x7b-dpo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF | tensorblock | 2025-04-21T00:36:20Z | 26 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:quantized:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T12:14:39Z | ---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
inference:
parameters:
temperature: 0.5
widget:
- messages:
- role: user
content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mistralai/Mixtral-8x7B-Instruct-v0.1 - GGUF
This repo contains GGUF format model files for [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s> [INST] {system_prompt}
{prompt} [/INST]
```
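The Q4_K_M file alone is about 28 GB, so offloading part of the model to a GPU is usually needed for reasonable speed. A minimal sketch assuming a GPU-enabled llama.cpp build; the layer count, paths, and question are illustrative:
```shell
# Offload 20 layers to the GPU and answer a single [INST] turn
# (system prompt on the first line, user prompt on the second; the leading
# <s> is added by the tokenizer).
./llama-cli \
  -m MY_LOCAL_DIR/Mixtral-8x7B-Instruct-v0.1-Q4_K_M.gguf \
  -ngl 20 \
  -n 256 \
  -p "[INST] You are a terse assistant.
Summarise the difference between the Q4_K_M and Q5_K_M quantizations. [/INST]"
```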
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mixtral-8x7B-Instruct-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mixtral-8x7B-Instruct-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [Mixtral-8x7B-Instruct-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [Mixtral-8x7B-Instruct-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [Mixtral-8x7B-Instruct-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mixtral-8x7B-Instruct-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [Mixtral-8x7B-Instruct-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [Mixtral-8x7B-Instruct-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mixtral-8x7B-Instruct-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [Mixtral-8x7B-Instruct-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [Mixtral-8x7B-Instruct-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [Mixtral-8x7B-Instruct-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF --include "Mixtral-8x7B-Instruct-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mixtral-8x7B-Instruct-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Rabbit-7B-DPO-Chat-GGUF | tensorblock | 2025-04-21T00:36:18Z | 27 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:viethq188/Rabbit-7B-DPO-Chat",
"base_model:quantized:viethq188/Rabbit-7B-DPO-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T11:52:49Z | ---
license: apache-2.0
base_model: viethq188/Rabbit-7B-DPO-Chat
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## viethq188/Rabbit-7B-DPO-Chat - GGUF
This repo contains GGUF format model files for [viethq188/Rabbit-7B-DPO-Chat](https://huggingface.co/viethq188/Rabbit-7B-DPO-Chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Rabbit-7B-DPO-Chat-Q2_K.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Rabbit-7B-DPO-Chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Rabbit-7B-DPO-Chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Rabbit-7B-DPO-Chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Rabbit-7B-DPO-Chat-Q4_0.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Rabbit-7B-DPO-Chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Rabbit-7B-DPO-Chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Rabbit-7B-DPO-Chat-Q5_0.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Rabbit-7B-DPO-Chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Rabbit-7B-DPO-Chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Rabbit-7B-DPO-Chat-Q6_K.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Rabbit-7B-DPO-Chat-Q8_0.gguf](https://huggingface.co/tensorblock/Rabbit-7B-DPO-Chat-GGUF/blob/main/Rabbit-7B-DPO-Chat-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Rabbit-7B-DPO-Chat-GGUF --include "Rabbit-7B-DPO-Chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Rabbit-7B-DPO-Chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/chat_gpt2_dpo-GGUF | tensorblock | 2025-04-21T00:36:11Z | 402 | 0 | null | [
"gguf",
"gpt2",
"dpo",
"trl",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Intel/orca_dpo_pairs",
"base_model:Sharathhebbar24/chat_gpt2_dpo",
"base_model:quantized:Sharathhebbar24/chat_gpt2_dpo",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-23T11:48:38Z | ---
language:
- en
license: apache-2.0
tags:
- gpt2
- dpo
- trl
- TensorBlock
- GGUF
datasets:
- HuggingFaceH4/ultrachat_200k
- Intel/orca_dpo_pairs
pipeline_tag: text-generation
base_model: Sharathhebbar24/chat_gpt2_dpo
model-index:
- name: chat_gpt2_dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 23.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 31.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.26
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Sharathhebbar24/chat_gpt2_dpo - GGUF
This repo contains GGUF format model files for [Sharathhebbar24/chat_gpt2_dpo](https://huggingface.co/Sharathhebbar24/chat_gpt2_dpo).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [chat_gpt2_dpo-Q2_K.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q2_K.gguf) | Q2_K | 0.081 GB | smallest, significant quality loss - not recommended for most purposes |
| [chat_gpt2_dpo-Q3_K_S.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q3_K_S.gguf) | Q3_K_S | 0.090 GB | very small, high quality loss |
| [chat_gpt2_dpo-Q3_K_M.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q3_K_M.gguf) | Q3_K_M | 0.098 GB | very small, high quality loss |
| [chat_gpt2_dpo-Q3_K_L.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q3_K_L.gguf) | Q3_K_L | 0.102 GB | small, substantial quality loss |
| [chat_gpt2_dpo-Q4_0.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q4_0.gguf) | Q4_0 | 0.107 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chat_gpt2_dpo-Q4_K_S.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q4_K_S.gguf) | Q4_K_S | 0.107 GB | small, greater quality loss |
| [chat_gpt2_dpo-Q4_K_M.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q4_K_M.gguf) | Q4_K_M | 0.113 GB | medium, balanced quality - recommended |
| [chat_gpt2_dpo-Q5_0.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q5_0.gguf) | Q5_0 | 0.122 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chat_gpt2_dpo-Q5_K_S.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q5_K_S.gguf) | Q5_K_S | 0.122 GB | large, low quality loss - recommended |
| [chat_gpt2_dpo-Q5_K_M.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q5_K_M.gguf) | Q5_K_M | 0.127 GB | large, very low quality loss - recommended |
| [chat_gpt2_dpo-Q6_K.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q6_K.gguf) | Q6_K | 0.138 GB | very large, extremely low quality loss |
| [chat_gpt2_dpo-Q8_0.gguf](https://huggingface.co/tensorblock/chat_gpt2_dpo-GGUF/blob/main/chat_gpt2_dpo-Q8_0.gguf) | Q8_0 | 0.178 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/chat_gpt2_dpo-GGUF --include "chat_gpt2_dpo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/chat_gpt2_dpo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ELYZA-japanese-Llama-2-13b-GGUF | tensorblock | 2025-04-21T00:36:04Z | 37 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"ja",
"en",
"base_model:elyza/ELYZA-japanese-Llama-2-13b",
"base_model:quantized:elyza/ELYZA-japanese-Llama-2-13b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T11:11:46Z | ---
license: llama2
language:
- ja
- en
tags:
- TensorBlock
- GGUF
base_model: elyza/ELYZA-japanese-Llama-2-13b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## elyza/ELYZA-japanese-Llama-2-13b - GGUF
This repo contains GGUF format model files for [elyza/ELYZA-japanese-Llama-2-13b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ELYZA-japanese-Llama-2-13b-Q2_K.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [ELYZA-japanese-Llama-2-13b-Q3_K_S.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [ELYZA-japanese-Llama-2-13b-Q3_K_M.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [ELYZA-japanese-Llama-2-13b-Q3_K_L.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [ELYZA-japanese-Llama-2-13b-Q4_0.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ELYZA-japanese-Llama-2-13b-Q4_K_S.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [ELYZA-japanese-Llama-2-13b-Q4_K_M.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [ELYZA-japanese-Llama-2-13b-Q5_0.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ELYZA-japanese-Llama-2-13b-Q5_K_S.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [ELYZA-japanese-Llama-2-13b-Q5_K_M.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [ELYZA-japanese-Llama-2-13b-Q6_K.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [ELYZA-japanese-Llama-2-13b-Q8_0.gguf](https://huggingface.co/tensorblock/ELYZA-japanese-Llama-2-13b-GGUF/blob/main/ELYZA-japanese-Llama-2-13b-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ELYZA-japanese-Llama-2-13b-GGUF --include "ELYZA-japanese-Llama-2-13b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ELYZA-japanese-Llama-2-13b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mixtral_11Bx2_MoE_19B-GGUF | tensorblock | 2025-04-21T00:36:01Z | 36 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:cloudyu/Mixtral_11Bx2_MoE_19B",
"base_model:quantized:cloudyu/Mixtral_11Bx2_MoE_19B",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T10:54:11Z | ---
license: cc-by-nc-4.0
base_model: cloudyu/Mixtral_11Bx2_MoE_19B
tags:
- TensorBlock
- GGUF
model-index:
- name: Mixtral_11Bx2_MoE_19B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_11Bx2_MoE_19B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_11Bx2_MoE_19B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_11Bx2_MoE_19B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_11Bx2_MoE_19B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_11Bx2_MoE_19B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_11Bx2_MoE_19B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cloudyu/Mixtral_11Bx2_MoE_19B - GGUF
This repo contains GGUF format model files for [cloudyu/Mixtral_11Bx2_MoE_19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mixtral_11Bx2_MoE_19B-Q2_K.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q2_K.gguf) | Q2_K | 7.066 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mixtral_11Bx2_MoE_19B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q3_K_S.gguf) | Q3_K_S | 8.299 GB | very small, high quality loss |
| [Mixtral_11Bx2_MoE_19B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q3_K_M.gguf) | Q3_K_M | 9.227 GB | very small, high quality loss |
| [Mixtral_11Bx2_MoE_19B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q3_K_L.gguf) | Q3_K_L | 10.012 GB | small, substantial quality loss |
| [Mixtral_11Bx2_MoE_19B-Q4_0.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q4_0.gguf) | Q4_0 | 10.830 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mixtral_11Bx2_MoE_19B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q4_K_S.gguf) | Q4_K_S | 10.920 GB | small, greater quality loss |
| [Mixtral_11Bx2_MoE_19B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q4_K_M.gguf) | Q4_K_M | 11.583 GB | medium, balanced quality - recommended |
| [Mixtral_11Bx2_MoE_19B-Q5_0.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q5_0.gguf) | Q5_0 | 13.212 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mixtral_11Bx2_MoE_19B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q5_K_S.gguf) | Q5_K_S | 13.212 GB | large, low quality loss - recommended |
| [Mixtral_11Bx2_MoE_19B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q5_K_M.gguf) | Q5_K_M | 13.600 GB | large, very low quality loss - recommended |
| [Mixtral_11Bx2_MoE_19B-Q6_K.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q6_K.gguf) | Q6_K | 15.743 GB | very large, extremely low quality loss |
| [Mixtral_11Bx2_MoE_19B-Q8_0.gguf](https://huggingface.co/tensorblock/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/Mixtral_11Bx2_MoE_19B-Q8_0.gguf) | Q8_0 | 20.390 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mixtral_11Bx2_MoE_19B-GGUF --include "Mixtral_11Bx2_MoE_19B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mixtral_11Bx2_MoE_19B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
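The same pattern-based download can be scripted in Python with `snapshot_download` from `huggingface_hub`; this is a sketch mirroring the `--include` filter above, with `MY_LOCAL_DIR` as a placeholder path:
```python
from huggingface_hub import snapshot_download

# Fetch every file in the repo whose name matches the Q4_K pattern,
# equivalent in spirit to the --include filter of the CLI command above.
snapshot_download(
    repo_id="tensorblock/Mixtral_11Bx2_MoE_19B-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```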
|
tensorblock/Genstruct-7B-GGUF | tensorblock | 2025-04-21T00:35:57Z | 49 | 0 | transformers | [
"transformers",
"gguf",
"Mistral",
"instruct",
"finetune",
"synthetic",
"TensorBlock",
"GGUF",
"en",
"base_model:NousResearch/Genstruct-7B",
"base_model:quantized:NousResearch/Genstruct-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T09:44:14Z | ---
base_model: NousResearch/Genstruct-7B
tags:
- Mistral
- instruct
- finetune
- synthetic
- TensorBlock
- GGUF
license: apache-2.0
language:
- en
library_name: transformers
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## NousResearch/Genstruct-7B - GGUF
This repo contains GGUF format model files for [NousResearch/Genstruct-7B](https://huggingface.co/NousResearch/Genstruct-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Genstruct-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Genstruct-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Genstruct-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Genstruct-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Genstruct-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Genstruct-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Genstruct-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Genstruct-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Genstruct-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Genstruct-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Genstruct-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Genstruct-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Genstruct-7B-GGUF/blob/main/Genstruct-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Genstruct-7B-GGUF --include "Genstruct-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Genstruct-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/GetCode-slerp-GGUF | tensorblock | 2025-04-21T00:35:50Z | 28 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"codellama/CodeLlama-7b-Instruct-hf",
"Salesforce/codegen25-7b-multi",
"TensorBlock",
"GGUF",
"base_model:mavihsrr/GetCode-slerp",
"base_model:quantized:mavihsrr/GetCode-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T07:22:21Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- codellama/CodeLlama-7b-Instruct-hf
- Salesforce/codegen25-7b-multi
- TensorBlock
- GGUF
base_model: mavihsrr/GetCode-slerp
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mavihsrr/GetCode-slerp - GGUF
This repo contains GGUF format model files for [mavihsrr/GetCode-slerp](https://huggingface.co/mavihsrr/GetCode-slerp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [GetCode-slerp-Q2_K.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [GetCode-slerp-Q3_K_S.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [GetCode-slerp-Q3_K_M.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [GetCode-slerp-Q3_K_L.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [GetCode-slerp-Q4_0.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [GetCode-slerp-Q4_K_S.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [GetCode-slerp-Q4_K_M.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [GetCode-slerp-Q5_0.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [GetCode-slerp-Q5_K_S.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [GetCode-slerp-Q5_K_M.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [GetCode-slerp-Q6_K.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [GetCode-slerp-Q8_0.gguf](https://huggingface.co/tensorblock/GetCode-slerp-GGUF/blob/main/GetCode-slerp-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/GetCode-slerp-GGUF --include "GetCode-slerp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/GetCode-slerp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Zyte-1B-GGUF | tensorblock | 2025-04-21T00:35:44Z | 28 | 0 | null | [
"gguf",
"slm",
"llama",
"tiny",
"tinyllama",
"TensorBlock",
"GGUF",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:venkycs/Zyte-1B",
"base_model:quantized:venkycs/Zyte-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T06:36:55Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
- bertscore
- bleu
tags:
- slm
- llama
- tiny
- tinyllama
- TensorBlock
- GGUF
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
base_model: venkycs/Zyte-1B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## venkycs/Zyte-1B - GGUF
This repo contains GGUF format model files for [venkycs/Zyte-1B](https://huggingface.co/venkycs/Zyte-1B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Zyte-1B-Q2_K.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [Zyte-1B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [Zyte-1B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [Zyte-1B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [Zyte-1B-Q4_0.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Zyte-1B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [Zyte-1B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [Zyte-1B-Q5_0.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Zyte-1B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [Zyte-1B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [Zyte-1B-Q6_K.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [Zyte-1B-Q8_0.gguf](https://huggingface.co/tensorblock/Zyte-1B-GGUF/blob/main/Zyte-1B-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Zyte-1B-GGUF --include "Zyte-1B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Zyte-1B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
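Once a file is downloaded, one way to run it locally is through the `llama-cpp-python` bindings. The sketch below is an illustration only: the quant file, context size, sampling settings, and example prompt are assumptions, and the prompt string simply follows the template shown earlier in this card:
```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file (the path is a placeholder).
llm = Llama(model_path="MY_LOCAL_DIR/Zyte-1B-Q4_K_M.gguf", n_ctx=2048)

# Build a prompt following the template from this card.
prompt = (
    "<|system|>\nYou are a helpful assistant.</s>\n"
    "<|user|>\nWhat is a GGUF file?</s>\n"
    "<|assistant|>\n"
)

output = llm(prompt, max_tokens=128, stop=["</s>"])
print(output["choices"][0]["text"])
```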
|
tensorblock/CapybaraMarcoroni-7B-GGUF | tensorblock | 2025-04-21T00:35:41Z | 38 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:AtAndDev/CapybaraMarcoroni-7B",
"base_model:quantized:AtAndDev/CapybaraMarcoroni-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T05:50:20Z | ---
base_model: AtAndDev/CapybaraMarcoroni-7B
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## AtAndDev/CapybaraMarcoroni-7B - GGUF
This repo contains GGUF format model files for [AtAndDev/CapybaraMarcoroni-7B](https://huggingface.co/AtAndDev/CapybaraMarcoroni-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CapybaraMarcoroni-7B-Q2_K.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [CapybaraMarcoroni-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [CapybaraMarcoroni-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [CapybaraMarcoroni-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [CapybaraMarcoroni-7B-Q4_0.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CapybaraMarcoroni-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [CapybaraMarcoroni-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [CapybaraMarcoroni-7B-Q5_0.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CapybaraMarcoroni-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [CapybaraMarcoroni-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [CapybaraMarcoroni-7B-Q6_K.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [CapybaraMarcoroni-7B-Q8_0.gguf](https://huggingface.co/tensorblock/CapybaraMarcoroni-7B-GGUF/blob/main/CapybaraMarcoroni-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CapybaraMarcoroni-7B-GGUF --include "CapybaraMarcoroni-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CapybaraMarcoroni-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/DareBeagel-2x7B-GGUF | tensorblock | 2025-04-21T00:35:38Z | 36 | 0 | null | [
"gguf",
"moe",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"mlabonne/NeuralDaredevil-7B",
"TensorBlock",
"GGUF",
"base_model:shadowml/DareBeagel-2x7B",
"base_model:quantized:shadowml/DareBeagel-2x7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T04:30:00Z | ---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralDaredevil-7B
- TensorBlock
- GGUF
base_model: shadowml/DareBeagel-2x7B
model-index:
- name: DareBeagel-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.09
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## shadowml/DareBeagel-2x7B - GGUF
This repo contains GGUF format model files for [shadowml/DareBeagel-2x7B](https://huggingface.co/shadowml/DareBeagel-2x7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DareBeagel-2x7B-Q2_K.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q2_K.gguf) | Q2_K | 4.761 GB | smallest, significant quality loss - not recommended for most purposes |
| [DareBeagel-2x7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q3_K_S.gguf) | Q3_K_S | 5.588 GB | very small, high quality loss |
| [DareBeagel-2x7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q3_K_M.gguf) | Q3_K_M | 6.206 GB | very small, high quality loss |
| [DareBeagel-2x7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q3_K_L.gguf) | Q3_K_L | 6.730 GB | small, substantial quality loss |
| [DareBeagel-2x7B-Q4_0.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q4_0.gguf) | Q4_0 | 7.281 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [DareBeagel-2x7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q4_K_S.gguf) | Q4_K_S | 7.342 GB | small, greater quality loss |
| [DareBeagel-2x7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q4_K_M.gguf) | Q4_K_M | 7.783 GB | medium, balanced quality - recommended |
| [DareBeagel-2x7B-Q5_0.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q5_0.gguf) | Q5_0 | 8.874 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [DareBeagel-2x7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q5_K_S.gguf) | Q5_K_S | 8.874 GB | large, low quality loss - recommended |
| [DareBeagel-2x7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q5_K_M.gguf) | Q5_K_M | 9.133 GB | large, very low quality loss - recommended |
| [DareBeagel-2x7B-Q6_K.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q6_K.gguf) | Q6_K | 10.567 GB | very large, extremely low quality loss |
| [DareBeagel-2x7B-Q8_0.gguf](https://huggingface.co/tensorblock/DareBeagel-2x7B-GGUF/blob/main/DareBeagel-2x7B-Q8_0.gguf) | Q8_0 | 13.686 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, downoad the individual model file the a local directory
```shell
huggingface-cli download tensorblock/DareBeagel-2x7B-GGUF --include "DareBeagel-2x7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/DareBeagel-2x7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF | tensorblock | 2025-04-21T00:35:31Z | 42 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"base_model:AIdenU/Mistral-7b-ko-Y24_v0.1",
"base_model:quantized:AIdenU/Mistral-7b-ko-Y24_v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-23T02:57:00Z | ---
language:
- ko
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: AIdenU/Mistral-7b-ko-Y24_v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## AIdenU/Mistral-7b-ko-Y24_v0.1 - GGUF
This repo contains GGUF format model files for [AIdenU/Mistral-7b-ko-Y24_v0.1](https://huggingface.co/AIdenU/Mistral-7b-ko-Y24_v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7b-ko-Y24_v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7b-ko-Y24_v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7b-ko-Y24_v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7b-ko-Y24_v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7b-ko-Y24_v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7b-ko-Y24_v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7b-ko-Y24_v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7b-ko-Y24_v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7b-ko-Y24_v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7b-ko-Y24_v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7b-ko-Y24_v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7b-ko-Y24_v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF/blob/main/Mistral-7b-ko-Y24_v0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF --include "Mistral-7b-ko-Y24_v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-7b-ko-Y24_v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF | tensorblock | 2025-04-21T00:35:24Z | 41 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:databricks/databricks-dolly-15k",
"dataset:lucasmccabe-lmi/CodeAlpaca-20k",
"base_model:HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
"base_model:quantized:HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T01:42:31Z | ---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
- lucasmccabe-lmi/CodeAlpaca-20k
base_model: HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca - GGUF
This repo contains GGUF format model files for [HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca](https://huggingface.co/HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q2_K.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q2_K.gguf) | Q2_K | 2.337 GB | smallest, significant quality loss - not recommended for most purposes |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q3_K_S.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q3_K_S.gguf) | Q3_K_S | 2.709 GB | very small, high quality loss |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q3_K_M.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q3_K_M.gguf) | Q3_K_M | 2.993 GB | very small, high quality loss |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q3_K_L.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q3_K_L.gguf) | Q3_K_L | 3.237 GB | small, substantial quality loss |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q4_0.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q4_0.gguf) | Q4_0 | 3.479 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q4_K_S.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q4_K_S.gguf) | Q4_K_S | 3.503 GB | small, greater quality loss |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q4_K_M.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q4_K_M.gguf) | Q4_K_M | 3.674 GB | medium, balanced quality - recommended |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q5_0.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q5_0.gguf) | Q5_0 | 4.204 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q5_K_S.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q5_K_S.gguf) | Q5_K_S | 4.204 GB | large, low quality loss - recommended |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q5_K_M.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q5_K_M.gguf) | Q5_K_M | 4.304 GB | large, very low quality loss - recommended |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q6_K.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q6_K.gguf) | Q6_K | 4.974 GB | very large, extremely low quality loss |
| [Instruct_Yi-6B_Dolly_CodeAlpaca-Q8_0.gguf](https://huggingface.co/tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF/blob/main/Instruct_Yi-6B_Dolly_CodeAlpaca-Q8_0.gguf) | Q8_0 | 6.442 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF --include "Instruct_Yi-6B_Dolly_CodeAlpaca-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Instruct_Yi-6B_Dolly_CodeAlpaca-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/WildWest-Variant3-7B-GGUF | tensorblock | 2025-04-21T00:35:23Z | 37 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:BarryFutureman/WildWest-Variant3-7B",
"base_model:quantized:BarryFutureman/WildWest-Variant3-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-23T01:27:47Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- merge
- TensorBlock
- GGUF
base_model: BarryFutureman/WildWest-Variant3-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## BarryFutureman/WildWest-Variant3-7B - GGUF
This repo contains GGUF format model files for [BarryFutureman/WildWest-Variant3-7B](https://huggingface.co/BarryFutureman/WildWest-Variant3-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [WildWest-Variant3-7B-Q2_K.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [WildWest-Variant3-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [WildWest-Variant3-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [WildWest-Variant3-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [WildWest-Variant3-7B-Q4_0.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [WildWest-Variant3-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [WildWest-Variant3-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [WildWest-Variant3-7B-Q5_0.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [WildWest-Variant3-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [WildWest-Variant3-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [WildWest-Variant3-7B-Q6_K.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [WildWest-Variant3-7B-Q8_0.gguf](https://huggingface.co/tensorblock/WildWest-Variant3-7B-GGUF/blob/main/WildWest-Variant3-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/WildWest-Variant3-7B-GGUF --include "WildWest-Variant3-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/WildWest-Variant3-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
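Once a file is downloaded, it can be loaded with any llama.cpp-compatible runtime. The sketch below assumes the third-party `llama-cpp-python` bindings (`pip install llama-cpp-python`) and simply applies the prompt template shown above; the file path and generation settings are examples:
```python
from llama_cpp import Llama

# Load the quantized model downloaded above.
llm = Llama(model_path="MY_LOCAL_DIR/WildWest-Variant3-7B-Q4_K_M.gguf", n_ctx=4096)

# Build the prompt with the template from this card.
prompt = (
    "<|system|>\nYou are a helpful assistant.</s>\n"
    "<|user|>\nWrite a one-sentence western story.</s>\n"
    "<|assistant|>\n"
)

# Generate a completion and print only the model's reply.
output = llm(prompt, max_tokens=128, stop=["</s>"])
print(output["choices"][0]["text"])
```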
|
tensorblock/Severus-7B-GGUF | tensorblock | 2025-04-21T00:35:21Z | 42 | 0 | null | [
"gguf",
"samir-fama/FernandoGPT-v1",
"FelixChao/NinjaDolphin-7B",
"TensorBlock",
"GGUF",
"base_model:FelixChao/Severus-7B",
"base_model:quantized:FelixChao/Severus-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T01:16:55Z | ---
license: apache-2.0
tags:
- samir-fama/FernandoGPT-v1
- FelixChao/NinjaDolphin-7B
- TensorBlock
- GGUF
base_model: FelixChao/Severus-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## FelixChao/Severus-7B - GGUF
This repo contains GGUF format model files for [FelixChao/Severus-7B](https://huggingface.co/FelixChao/Severus-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Severus-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Severus-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Severus-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Severus-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Severus-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Severus-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Severus-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Severus-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Severus-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Severus-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Severus-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Severus-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Severus-7B-GGUF/blob/main/Severus-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Severus-7B-GGUF --include "Severus-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Severus-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
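To check which quantized files are actually available before downloading, you can list the repo contents programmatically. A small sketch using the `huggingface_hub` API (the `Q4_K_M` filter is just an example):
```python
from huggingface_hub import HfApi

api = HfApi()

# Keep only the GGUF quantizations in this repo.
files = [f for f in api.list_repo_files("tensorblock/Severus-7B-GGUF") if f.endswith(".gguf")]
print(files)

# Pick, for example, the Q4_K_M variant recommended in the table above.
print([f for f in files if "Q4_K_M" in f])
```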
|
tensorblock/blossom-v4-mistral-7b-GGUF | tensorblock | 2025-04-21T00:35:20Z | 41 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"zh",
"en",
"dataset:Azure99/blossom-chat-v2",
"dataset:Azure99/blossom-math-v3",
"dataset:Azure99/blossom-wizard-v2",
"dataset:Azure99/blossom-orca-v2",
"base_model:Azure99/blossom-v4-mistral-7b",
"base_model:quantized:Azure99/blossom-v4-mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-23T00:49:17Z | ---
license: apache-2.0
datasets:
- Azure99/blossom-chat-v2
- Azure99/blossom-math-v3
- Azure99/blossom-wizard-v2
- Azure99/blossom-orca-v2
language:
- zh
- en
base_model: Azure99/blossom-v4-mistral-7b
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Azure99/blossom-v4-mistral-7b - GGUF
This repo contains GGUF format model files for [Azure99/blossom-v4-mistral-7b](https://huggingface.co/Azure99/blossom-v4-mistral-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [blossom-v4-mistral-7b-Q2_K.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [blossom-v4-mistral-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [blossom-v4-mistral-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [blossom-v4-mistral-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [blossom-v4-mistral-7b-Q4_0.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [blossom-v4-mistral-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [blossom-v4-mistral-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [blossom-v4-mistral-7b-Q5_0.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [blossom-v4-mistral-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [blossom-v4-mistral-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [blossom-v4-mistral-7b-Q6_K.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [blossom-v4-mistral-7b-Q8_0.gguf](https://huggingface.co/tensorblock/blossom-v4-mistral-7b-GGUF/blob/main/blossom-v4-mistral-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/blossom-v4-mistral-7b-GGUF --include "blossom-v4-mistral-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/blossom-v4-mistral-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF | tensorblock | 2025-04-21T00:35:18Z | 38 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:Intel/orca_dpo_pairs",
"base_model:bhavinjawade/SOLAR-10B-OrcaDPO-Jawade",
"base_model:quantized:bhavinjawade/SOLAR-10B-OrcaDPO-Jawade",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-23T00:05:54Z | ---
license: mit
datasets:
- Intel/orca_dpo_pairs
tags:
- TensorBlock
- GGUF
base_model: bhavinjawade/SOLAR-10B-OrcaDPO-Jawade
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## bhavinjawade/SOLAR-10B-OrcaDPO-Jawade - GGUF
This repo contains GGUF format model files for [bhavinjawade/SOLAR-10B-OrcaDPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-OrcaDPO-Jawade).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SOLAR-10B-OrcaDPO-Jawade-Q2_K.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [SOLAR-10B-OrcaDPO-Jawade-Q3_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [SOLAR-10B-OrcaDPO-Jawade-Q3_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [SOLAR-10B-OrcaDPO-Jawade-Q3_K_L.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [SOLAR-10B-OrcaDPO-Jawade-Q4_0.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SOLAR-10B-OrcaDPO-Jawade-Q4_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [SOLAR-10B-OrcaDPO-Jawade-Q4_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [SOLAR-10B-OrcaDPO-Jawade-Q5_0.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SOLAR-10B-OrcaDPO-Jawade-Q5_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [SOLAR-10B-OrcaDPO-Jawade-Q5_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [SOLAR-10B-OrcaDPO-Jawade-Q6_K.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [SOLAR-10B-OrcaDPO-Jawade-Q8_0.gguf](https://huggingface.co/tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF/blob/main/SOLAR-10B-OrcaDPO-Jawade-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF --include "SOLAR-10B-OrcaDPO-Jawade-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
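The pattern-based download also has a Python equivalent via `snapshot_download`; in this sketch `allow_patterns` plays the role of the CLI `--include` flag (pattern and directory are examples):
```python
from huggingface_hub import snapshot_download

# Download every file matching the pattern into MY_LOCAL_DIR.
snapshot_download(
    repo_id="tensorblock/SOLAR-10B-OrcaDPO-Jawade-GGUF",
    local_dir="MY_LOCAL_DIR",
    allow_patterns=["*Q4_K*gguf"],
)
```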
|
tensorblock/Pallas-0.5-LASER-0.5-GGUF | tensorblock | 2025-04-21T00:35:13Z | 40 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Mihaiii/Pallas-0.5-LASER-0.5",
"base_model:quantized:Mihaiii/Pallas-0.5-LASER-0.5",
"license:other",
"region:us"
] | null | 2024-12-22T22:07:00Z | ---
base_model: Mihaiii/Pallas-0.5-LASER-0.5
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Mihaiii/Pallas-0.5-LASER-0.5 - GGUF
This repo contains GGUF format model files for [Mihaiii/Pallas-0.5-LASER-0.5](https://huggingface.co/Mihaiii/Pallas-0.5-LASER-0.5).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Pallas-0.5-LASER-0.5-Q2_K.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [Pallas-0.5-LASER-0.5-Q3_K_S.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [Pallas-0.5-LASER-0.5-Q3_K_M.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [Pallas-0.5-LASER-0.5-Q3_K_L.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [Pallas-0.5-LASER-0.5-Q4_0.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Pallas-0.5-LASER-0.5-Q4_K_S.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [Pallas-0.5-LASER-0.5-Q4_K_M.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [Pallas-0.5-LASER-0.5-Q5_0.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Pallas-0.5-LASER-0.5-Q5_K_S.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [Pallas-0.5-LASER-0.5-Q5_K_M.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [Pallas-0.5-LASER-0.5-Q6_K.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [Pallas-0.5-LASER-0.5-Q8_0.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.5-GGUF/blob/main/Pallas-0.5-LASER-0.5-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Pallas-0.5-LASER-0.5-GGUF --include "Pallas-0.5-LASER-0.5-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Pallas-0.5-LASER-0.5-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF | tensorblock | 2025-04-21T00:35:08Z | 65 | 0 | null | [
"gguf",
"llama2",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"base_model:AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1",
"base_model:quantized:AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-22T18:27:12Z | ---
language:
- ko
pipeline_tag: text-generation
tags:
- llama2
- TensorBlock
- GGUF
base_model: AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1 - GGUF
This repo contains GGUF format model files for [AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1](https://huggingface.co/AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [LLAMA-2-13b-ko-Y24-DPO_v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF/blob/main/LLAMA-2-13b-ko-Y24-DPO_v0.1-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF --include "LLAMA-2-13b-ko-Y24-DPO_v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/LLAMA-2-13b-ko-Y24-DPO_v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Orca-Hermes-7B-slerp-GGUF | tensorblock | 2025-04-21T00:35:06Z | 53 | 0 | null | [
"gguf",
"merge",
"mergekit",
"Open-Orca/Mistral-7B-OpenOrca",
"teknium/OpenHermes-2.5-Mistral-7B",
"TensorBlock",
"GGUF",
"base_model:cris177/Orca-Hermes-7B-slerp",
"base_model:quantized:cris177/Orca-Hermes-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-22T17:48:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- Open-Orca/Mistral-7B-OpenOrca
- teknium/OpenHermes-2.5-Mistral-7B
- TensorBlock
- GGUF
base_model: cris177/Orca-Hermes-7B-slerp
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cris177/Orca-Hermes-7B-slerp - GGUF
This repo contains GGUF format model files for [cris177/Orca-Hermes-7B-slerp](https://huggingface.co/cris177/Orca-Hermes-7B-slerp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Orca-Hermes-7B-slerp-Q2_K.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Orca-Hermes-7B-slerp-Q3_K_S.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Orca-Hermes-7B-slerp-Q3_K_M.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Orca-Hermes-7B-slerp-Q3_K_L.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Orca-Hermes-7B-slerp-Q4_0.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Orca-Hermes-7B-slerp-Q4_K_S.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Orca-Hermes-7B-slerp-Q4_K_M.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Orca-Hermes-7B-slerp-Q5_0.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Orca-Hermes-7B-slerp-Q5_K_S.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Orca-Hermes-7B-slerp-Q5_K_M.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Orca-Hermes-7B-slerp-Q6_K.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Orca-Hermes-7B-slerp-Q8_0.gguf](https://huggingface.co/tensorblock/Orca-Hermes-7B-slerp-GGUF/blob/main/Orca-Hermes-7B-slerp-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Orca-Hermes-7B-slerp-GGUF --include "Orca-Hermes-7B-slerp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Orca-Hermes-7B-slerp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/TeenyTinyLlama-460m-Chat-GGUF | tensorblock | 2025-04-21T00:35:02Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"TensorBlock",
"GGUF",
"text-generation",
"pt",
"dataset:nicholasKluge/instruct-aira-dataset-v2",
"base_model:nicholasKluge/TeenyTinyLlama-460m-Chat",
"base_model:quantized:nicholasKluge/TeenyTinyLlama-460m-Chat",
"license:apache-2.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-22T16:16:46Z | ---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset-v2
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
- TensorBlock
- GGUF
widget:
- text: <s><instruction>Cite algumas bandas de rock famosas da dΓ©cada de 1960.</instruction>
example_title: Exemplo
- text: <s><instruction>Quantos planetas existem no sistema solar?</instruction>
example_title: Exemplo
- text: <s><instruction>Qual Γ© o futuro do ser humano?</instruction>
example_title: Exemplo
- text: <s><instruction>Qual o sentido da vida?</instruction>
example_title: Exemplo
- text: <s><instruction>Como imprimir hello world em python?</instruction>
example_title: Exemplo
- text: <s><instruction>Invente uma histΓ³ria sobre um encanador com poderes mΓ‘gicos.</instruction>
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 2530
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
base_model: nicholasKluge/TeenyTinyLlama-460m-Chat
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## nicholasKluge/TeenyTinyLlama-460m-Chat - GGUF
This repo contains GGUF format model files for [nicholasKluge/TeenyTinyLlama-460m-Chat](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-Chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TeenyTinyLlama-460m-Chat-Q2_K.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q2_K.gguf) | Q2_K | 0.186 GB | smallest, significant quality loss - not recommended for most purposes |
| [TeenyTinyLlama-460m-Chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q3_K_S.gguf) | Q3_K_S | 0.215 GB | very small, high quality loss |
| [TeenyTinyLlama-460m-Chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q3_K_M.gguf) | Q3_K_M | 0.236 GB | very small, high quality loss |
| [TeenyTinyLlama-460m-Chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q3_K_L.gguf) | Q3_K_L | 0.254 GB | small, substantial quality loss |
| [TeenyTinyLlama-460m-Chat-Q4_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q4_0.gguf) | Q4_0 | 0.273 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TeenyTinyLlama-460m-Chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q4_K_S.gguf) | Q4_K_S | 0.275 GB | small, greater quality loss |
| [TeenyTinyLlama-460m-Chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q4_K_M.gguf) | Q4_K_M | 0.289 GB | medium, balanced quality - recommended |
| [TeenyTinyLlama-460m-Chat-Q5_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q5_0.gguf) | Q5_0 | 0.327 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TeenyTinyLlama-460m-Chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q5_K_S.gguf) | Q5_K_S | 0.327 GB | large, low quality loss - recommended |
| [TeenyTinyLlama-460m-Chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q5_K_M.gguf) | Q5_K_M | 0.336 GB | large, very low quality loss - recommended |
| [TeenyTinyLlama-460m-Chat-Q6_K.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q6_K.gguf) | Q6_K | 0.385 GB | very large, extremely low quality loss |
| [TeenyTinyLlama-460m-Chat-Q8_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-460m-Chat-GGUF/blob/main/TeenyTinyLlama-460m-Chat-Q8_0.gguf) | Q8_0 | 0.498 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TeenyTinyLlama-460m-Chat-GGUF --include "TeenyTinyLlama-460m-Chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TeenyTinyLlama-460m-Chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/NeuralPizza-7B-V0.1-GGUF | tensorblock | 2025-04-21T00:34:59Z | 37 | 0 | Transformers | [
"Transformers",
"gguf",
"transformers",
"fine-tuned",
"language-modeling",
"direct-preference-optimization",
"TensorBlock",
"GGUF",
"dataset:Intel/orca_dpo_pairs",
"base_model:RatanRohith/NeuralPizza-7B-V0.1",
"base_model:quantized:RatanRohith/NeuralPizza-7B-V0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-22T11:23:42Z | ---
library_name: Transformers
tags:
- transformers
- fine-tuned
- language-modeling
- direct-preference-optimization
- TensorBlock
- GGUF
datasets:
- Intel/orca_dpo_pairs
license: apache-2.0
base_model: RatanRohith/NeuralPizza-7B-V0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## RatanRohith/NeuralPizza-7B-V0.1 - GGUF
This repo contains GGUF format model files for [RatanRohith/NeuralPizza-7B-V0.1](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralPizza-7B-V0.1-Q2_K.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralPizza-7B-V0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [NeuralPizza-7B-V0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [NeuralPizza-7B-V0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [NeuralPizza-7B-V0.1-Q4_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralPizza-7B-V0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [NeuralPizza-7B-V0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [NeuralPizza-7B-V0.1-Q5_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralPizza-7B-V0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [NeuralPizza-7B-V0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [NeuralPizza-7B-V0.1-Q6_K.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [NeuralPizza-7B-V0.1-Q8_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.1-GGUF/blob/main/NeuralPizza-7B-V0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/NeuralPizza-7B-V0.1-GGUF --include "NeuralPizza-7B-V0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/NeuralPizza-7B-V0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/OpenMistral-MoE-GGUF | tensorblock | 2025-04-21T00:34:50Z | 69 | 0 | null | [
"gguf",
"MoE",
"TensorBlock",
"GGUF",
"base_model:yashmarathe/OpenMistral-MoE",
"base_model:quantized:yashmarathe/OpenMistral-MoE",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-22T06:04:30Z | ---
tags:
- MoE
- TensorBlock
- GGUF
base_model: Yash21/OpenMistral-MoE
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Yash21/OpenMistral-MoE - GGUF
This repo contains GGUF format model files for [Yash21/OpenMistral-MoE](https://huggingface.co/Yash21/OpenMistral-MoE).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OpenMistral-MoE-Q2_K.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q2_K.gguf) | Q2_K | 8.843 GB | smallest, significant quality loss - not recommended for most purposes |
| [OpenMistral-MoE-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q3_K_S.gguf) | Q3_K_S | 10.433 GB | very small, high quality loss |
| [OpenMistral-MoE-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q3_K_M.gguf) | Q3_K_M | 11.580 GB | very small, high quality loss |
| [OpenMistral-MoE-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q3_K_L.gguf) | Q3_K_L | 12.544 GB | small, substantial quality loss |
| [OpenMistral-MoE-Q4_0.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q4_0.gguf) | Q4_0 | 13.624 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OpenMistral-MoE-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q4_K_S.gguf) | Q4_K_S | 13.743 GB | small, greater quality loss |
| [OpenMistral-MoE-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q4_K_M.gguf) | Q4_K_M | 14.610 GB | medium, balanced quality - recommended |
| [OpenMistral-MoE-Q5_0.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q5_0.gguf) | Q5_0 | 16.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OpenMistral-MoE-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q5_K_S.gguf) | Q5_K_S | 16.626 GB | large, low quality loss - recommended |
| [OpenMistral-MoE-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q5_K_M.gguf) | Q5_K_M | 17.134 GB | large, very low quality loss - recommended |
| [OpenMistral-MoE-Q6_K.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q6_K.gguf) | Q6_K | 19.817 GB | very large, extremely low quality loss |
| [OpenMistral-MoE-Q8_0.gguf](https://huggingface.co/tensorblock/OpenMistral-MoE-GGUF/blob/main/OpenMistral-MoE-Q8_0.gguf) | Q8_0 | 25.666 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenMistral-MoE-GGUF --include "OpenMistral-MoE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenMistral-MoE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Psyfighter2-Orca2-13B-ties-GGUF | tensorblock | 2025-04-21T00:34:46Z | 82 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"microsoft/Orca-2-13b",
"KoboldAI/LLaMA2-13B-Psyfighter2",
"TensorBlock",
"GGUF",
"base_model:tuantran1632001/Psyfighter2-Orca2-13B-ties",
"base_model:quantized:tuantran1632001/Psyfighter2-Orca2-13B-ties",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-22T03:32:21Z | ---
license: other
tags:
- merge
- mergekit
- lazymergekit
- microsoft/Orca-2-13b
- KoboldAI/LLaMA2-13B-Psyfighter2
- TensorBlock
- GGUF
base_model: tuantran1632001/Psyfighter2-Orca2-13B-ties
license_name: microsoft-research-license
model-index:
- name: Psyfighter2-Orca2-13B-ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.4
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## tuantran1632001/Psyfighter2-Orca2-13B-ties - GGUF
This repo contains GGUF format model files for [tuantran1632001/Psyfighter2-Orca2-13B-ties](https://huggingface.co/tuantran1632001/Psyfighter2-Orca2-13B-ties).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Psyfighter2-Orca2-13B-ties-Q2_K.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [Psyfighter2-Orca2-13B-ties-Q3_K_S.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [Psyfighter2-Orca2-13B-ties-Q3_K_M.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [Psyfighter2-Orca2-13B-ties-Q3_K_L.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [Psyfighter2-Orca2-13B-ties-Q4_0.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Psyfighter2-Orca2-13B-ties-Q4_K_S.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [Psyfighter2-Orca2-13B-ties-Q4_K_M.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [Psyfighter2-Orca2-13B-ties-Q5_0.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Psyfighter2-Orca2-13B-ties-Q5_K_S.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [Psyfighter2-Orca2-13B-ties-Q5_K_M.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [Psyfighter2-Orca2-13B-ties-Q6_K.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [Psyfighter2-Orca2-13B-ties-Q8_0.gguf](https://huggingface.co/tensorblock/Psyfighter2-Orca2-13B-ties-GGUF/blob/main/Psyfighter2-Orca2-13B-ties-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Psyfighter2-Orca2-13B-ties-GGUF --include "Psyfighter2-Orca2-13B-ties-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Psyfighter2-Orca2-13B-ties-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/StopCarbon-10.7B-v3-GGUF | tensorblock | 2025-04-21T00:34:45Z | 27 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"en",
"base_model:kekmodel/StopCarbon-10.7B-v3",
"base_model:quantized:kekmodel/StopCarbon-10.7B-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-22T02:36:22Z | ---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
- TensorBlock
- GGUF
base_model: kekmodel/StopCarbon-10.7B-v3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kekmodel/StopCarbon-10.7B-v3 - GGUF
This repo contains GGUF format model files for [kekmodel/StopCarbon-10.7B-v3](https://huggingface.co/kekmodel/StopCarbon-10.7B-v3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [StopCarbon-10.7B-v3-Q2_K.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [StopCarbon-10.7B-v3-Q3_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [StopCarbon-10.7B-v3-Q3_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [StopCarbon-10.7B-v3-Q3_K_L.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [StopCarbon-10.7B-v3-Q4_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [StopCarbon-10.7B-v3-Q4_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [StopCarbon-10.7B-v3-Q4_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [StopCarbon-10.7B-v3-Q5_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [StopCarbon-10.7B-v3-Q5_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [StopCarbon-10.7B-v3-Q5_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [StopCarbon-10.7B-v3-Q6_K.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [StopCarbon-10.7B-v3-Q8_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v3-GGUF/blob/main/StopCarbon-10.7B-v3-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/StopCarbon-10.7B-v3-GGUF --include "StopCarbon-10.7B-v3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/StopCarbon-10.7B-v3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/TenyxChat-8x7B-v1-GGUF | tensorblock | 2025-04-21T00:34:43Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"TensorBlock",
"GGUF",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:tenyx/TenyxChat-8x7B-v1",
"base_model:quantized:tenyx/TenyxChat-8x7B-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-22T02:30:57Z | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
- TensorBlock
- GGUF
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: tenyx/TenyxChat-8x7B-v1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## tenyx/TenyxChat-8x7B-v1 - GGUF
This repo contains GGUF format model files for [tenyx/TenyxChat-8x7B-v1](https://huggingface.co/tenyx/TenyxChat-8x7B-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST]{system_prompt}[/INST][INST]{prompt}[/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TenyxChat-8x7B-v1-Q2_K.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [TenyxChat-8x7B-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [TenyxChat-8x7B-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [TenyxChat-8x7B-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [TenyxChat-8x7B-v1-Q4_0.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TenyxChat-8x7B-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [TenyxChat-8x7B-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [TenyxChat-8x7B-v1-Q5_0.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TenyxChat-8x7B-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [TenyxChat-8x7B-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [TenyxChat-8x7B-v1-Q6_K.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [TenyxChat-8x7B-v1-Q8_0.gguf](https://huggingface.co/tensorblock/TenyxChat-8x7B-v1-GGUF/blob/main/TenyxChat-8x7B-v1-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TenyxChat-8x7B-v1-GGUF --include "TenyxChat-8x7B-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TenyxChat-8x7B-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
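If you prefer to script the download instead of using the CLI, the same file can be fetched with the `huggingface_hub` Python API. A minimal sketch (illustrative, not part of the original card), assuming the package installed in the `pip install` step above and the Q2_K quant from the table as an example:
```python
# Minimal sketch: download a single GGUF file programmatically with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/TenyxChat-8x7B-v1-GGUF",
    filename="TenyxChat-8x7B-v1-Q2_K.gguf",  # pick any quant listed in the table above
    local_dir="MY_LOCAL_DIR",
)
print(path)  # local path of the downloaded file
```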
|
tensorblock/Monsoon-7B-exp-1-GGUF | tensorblock | 2025-04-21T00:34:38Z | 64 | 0 | null | [
"gguf",
"nlp",
"chinese",
"mistral",
"mixtral",
"traditional_chinese",
"merge",
"mergekit",
"MediaTek-Research/Breeze-7B-Instruct-v0_1",
"SanjiWatsuki/Silicon-Maid-7B",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"base_model:yuuko-eth/Monsoon-7B-exp-1",
"base_model:quantized:yuuko-eth/Monsoon-7B-exp-1",
"license:unknown",
"region:us"
] | text-generation | 2024-12-21T23:28:43Z | ---
inference: false
language:
- zh
- en
license: unknown
model_name: Monsoon-7B-exp-1
pipeline_tag: text-generation
prompt_template: <s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]
tags:
- nlp
- chinese
- mistral
- mixtral
- traditional_chinese
- merge
- mergekit
- MediaTek-Research/Breeze-7B-Instruct-v0_1
- SanjiWatsuki/Silicon-Maid-7B
- TensorBlock
- GGUF
base_model: yuuko-eth/Monsoon-7B-exp-1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## yuuko-eth/Monsoon-7B-exp-1 - GGUF
This repo contains GGUF format model files for [yuuko-eth/Monsoon-7B-exp-1](https://huggingface.co/yuuko-eth/Monsoon-7B-exp-1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Monsoon-7B-exp-1-Q2_K.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q2_K.gguf) | Q2_K | 2.860 GB | smallest, significant quality loss - not recommended for most purposes |
| [Monsoon-7B-exp-1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q3_K_S.gguf) | Q3_K_S | 3.318 GB | very small, high quality loss |
| [Monsoon-7B-exp-1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q3_K_M.gguf) | Q3_K_M | 3.673 GB | very small, high quality loss |
| [Monsoon-7B-exp-1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q3_K_L.gguf) | Q3_K_L | 3.976 GB | small, substantial quality loss |
| [Monsoon-7B-exp-1-Q4_0.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q4_0.gguf) | Q4_0 | 4.279 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Monsoon-7B-exp-1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q4_K_S.gguf) | Q4_K_S | 4.310 GB | small, greater quality loss |
| [Monsoon-7B-exp-1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q4_K_M.gguf) | Q4_K_M | 4.538 GB | medium, balanced quality - recommended |
| [Monsoon-7B-exp-1-Q5_0.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q5_0.gguf) | Q5_0 | 5.183 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Monsoon-7B-exp-1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q5_K_S.gguf) | Q5_K_S | 5.183 GB | large, low quality loss - recommended |
| [Monsoon-7B-exp-1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q5_K_M.gguf) | Q5_K_M | 5.317 GB | large, very low quality loss - recommended |
| [Monsoon-7B-exp-1-Q6_K.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [Monsoon-7B-exp-1-Q8_0.gguf](https://huggingface.co/tensorblock/Monsoon-7B-exp-1-GGUF/blob/main/Monsoon-7B-exp-1-Q8_0.gguf) | Q8_0 | 7.957 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Monsoon-7B-exp-1-GGUF --include "Monsoon-7B-exp-1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Monsoon-7B-exp-1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/DareBeagle-7B-GGUF | tensorblock | 2025-04-21T00:34:36Z | 47 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"mlabonne/NeuralDaredevil-7B",
"TensorBlock",
"GGUF",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:flemmingmiguel/DareBeagle-7B",
"base_model:quantized:flemmingmiguel/DareBeagle-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T22:47:30Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralDaredevil-7B
- TensorBlock
- GGUF
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
base_model: flemmingmiguel/DareBeagle-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## flemmingmiguel/DareBeagle-7B - GGUF
This repo contains GGUF format model files for [flemmingmiguel/DareBeagle-7B](https://huggingface.co/flemmingmiguel/DareBeagle-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DareBeagle-7B-Q2_K.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [DareBeagle-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [DareBeagle-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [DareBeagle-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [DareBeagle-7B-Q4_0.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [DareBeagle-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [DareBeagle-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [DareBeagle-7B-Q5_0.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [DareBeagle-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [DareBeagle-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [DareBeagle-7B-Q6_K.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [DareBeagle-7B-Q8_0.gguf](https://huggingface.co/tensorblock/DareBeagle-7B-GGUF/blob/main/DareBeagle-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/DareBeagle-7B-GGUF --include "DareBeagle-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/DareBeagle-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/vinallama-7b-chat-GGUF | tensorblock | 2025-04-21T00:34:33Z | 32 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"vi",
"base_model:vilm/vinallama-7b-chat",
"base_model:quantized:vilm/vinallama-7b-chat",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T21:36:12Z | ---
language:
- vi
license: llama2
base_model: vilm/vinallama-7b-chat
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## vilm/vinallama-7b-chat - GGUF
This repo contains GGUF format model files for [vilm/vinallama-7b-chat](https://huggingface.co/vilm/vinallama-7b-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
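For clarity (an illustrative addition, not part of the original card), the template above can be filled in with plain string formatting; the `<|im_start|>` / `<|im_end|>` markers must appear exactly as shown:
```python
# Minimal sketch: build a prompt string in the format shown above.
# The system_prompt/prompt values are placeholders; keep the special tokens verbatim.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```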
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [vinallama-7b-chat-Q2_K.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q2_K.gguf) | Q2_K | 2.600 GB | smallest, significant quality loss - not recommended for most purposes |
| [vinallama-7b-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [vinallama-7b-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [vinallama-7b-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [vinallama-7b-chat-Q4_0.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vinallama-7b-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [vinallama-7b-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q4_K_M.gguf) | Q4_K_M | 4.162 GB | medium, balanced quality - recommended |
| [vinallama-7b-chat-Q5_0.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q5_0.gguf) | Q5_0 | 4.740 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vinallama-7b-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q5_K_S.gguf) | Q5_K_S | 4.740 GB | large, low quality loss - recommended |
| [vinallama-7b-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [vinallama-7b-chat-Q6_K.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [vinallama-7b-chat-Q8_0.gguf](https://huggingface.co/tensorblock/vinallama-7b-chat-GGUF/blob/main/vinallama-7b-chat-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/vinallama-7b-chat-GGUF --include "vinallama-7b-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/vinallama-7b-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/HelpingAI-Lite-2x1B-GGUF | tensorblock | 2025-04-21T00:34:32Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"HelpingAI",
"coder",
"lite",
"Fine-tuned",
"moe",
"nlp",
"TensorBlock",
"GGUF",
"en",
"base_model:OEvortex/HelpingAI-Lite-2x1B",
"base_model:quantized:OEvortex/HelpingAI-Lite-2x1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T21:26:41Z | ---
language:
- en
metrics:
- accuracy
library_name: transformers
base_model: OEvortex/HelpingAI-Lite-2x1B
tags:
- HelpingAI
- coder
- lite
- Fine-tuned
- moe
- nlp
- TensorBlock
- GGUF
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## OEvortex/HelpingAI-Lite-2x1B - GGUF
This repo contains GGUF format model files for [OEvortex/HelpingAI-Lite-2x1B](https://huggingface.co/OEvortex/HelpingAI-Lite-2x1B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [HelpingAI-Lite-2x1B-Q2_K.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q2_K.gguf) | Q2_K | 0.708 GB | smallest, significant quality loss - not recommended for most purposes |
| [HelpingAI-Lite-2x1B-Q3_K_S.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q3_K_S.gguf) | Q3_K_S | 0.827 GB | very small, high quality loss |
| [HelpingAI-Lite-2x1B-Q3_K_M.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q3_K_M.gguf) | Q3_K_M | 0.911 GB | very small, high quality loss |
| [HelpingAI-Lite-2x1B-Q3_K_L.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q3_K_L.gguf) | Q3_K_L | 0.984 GB | small, substantial quality loss |
| [HelpingAI-Lite-2x1B-Q4_0.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q4_0.gguf) | Q4_0 | 1.065 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [HelpingAI-Lite-2x1B-Q4_K_S.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q4_K_S.gguf) | Q4_K_S | 1.071 GB | small, greater quality loss |
| [HelpingAI-Lite-2x1B-Q4_K_M.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q4_K_M.gguf) | Q4_K_M | 1.126 GB | medium, balanced quality - recommended |
| [HelpingAI-Lite-2x1B-Q5_0.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q5_0.gguf) | Q5_0 | 1.290 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [HelpingAI-Lite-2x1B-Q5_K_S.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q5_K_S.gguf) | Q5_K_S | 1.290 GB | large, low quality loss - recommended |
| [HelpingAI-Lite-2x1B-Q5_K_M.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q5_K_M.gguf) | Q5_K_M | 1.321 GB | large, very low quality loss - recommended |
| [HelpingAI-Lite-2x1B-Q6_K.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q6_K.gguf) | Q6_K | 1.528 GB | very large, extremely low quality loss |
| [HelpingAI-Lite-2x1B-Q8_0.gguf](https://huggingface.co/tensorblock/HelpingAI-Lite-2x1B-GGUF/blob/main/HelpingAI-Lite-2x1B-Q8_0.gguf) | Q8_0 | 1.979 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/HelpingAI-Lite-2x1B-GGUF --include "HelpingAI-Lite-2x1B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/HelpingAI-Lite-2x1B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
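Equivalently (an illustrative sketch, not part of the original card), the pattern-based download can be scripted with `snapshot_download` from `huggingface_hub`, assuming the same package as in the install step above:
```python
# Minimal sketch: mirror the pattern download (`--include '*Q4_K*gguf'`) in Python.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tensorblock/HelpingAI-Lite-2x1B-GGUF",
    allow_patterns=["*Q4_K*gguf"],  # same glob as the CLI example above
    local_dir="MY_LOCAL_DIR",
)
```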
|
tensorblock/TinyLLama-4x1.1B-MoE-GGUF | tensorblock | 2025-04-21T00:34:29Z | 61 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:s3nh/TinyLLama-4x1.1B-MoE",
"base_model:quantized:s3nh/TinyLLama-4x1.1B-MoE",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-21T20:53:24Z | ---
base_model: s3nh/TinyLLama-4x1.1B-MoE
tags:
- mergekit
- merge
- TensorBlock
- GGUF
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## s3nh/TinyLLama-4x1.1B-MoE - GGUF
This repo contains GGUF format model files for [s3nh/TinyLLama-4x1.1B-MoE](https://huggingface.co/s3nh/TinyLLama-4x1.1B-MoE).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TinyLLama-4x1.1B-MoE-Q2_K.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q2_K.gguf) | Q2_K | 1.260 GB | smallest, significant quality loss - not recommended for most purposes |
| [TinyLLama-4x1.1B-MoE-Q3_K_S.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q3_K_S.gguf) | Q3_K_S | 1.481 GB | very small, high quality loss |
| [TinyLLama-4x1.1B-MoE-Q3_K_M.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q3_K_M.gguf) | Q3_K_M | 1.636 GB | very small, high quality loss |
| [TinyLLama-4x1.1B-MoE-Q3_K_L.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q3_K_L.gguf) | Q3_K_L | 1.770 GB | small, substantial quality loss |
| [TinyLLama-4x1.1B-MoE-Q4_0.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q4_0.gguf) | Q4_0 | 1.922 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TinyLLama-4x1.1B-MoE-Q4_K_S.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q4_K_S.gguf) | Q4_K_S | 1.934 GB | small, greater quality loss |
| [TinyLLama-4x1.1B-MoE-Q4_K_M.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q4_K_M.gguf) | Q4_K_M | 2.042 GB | medium, balanced quality - recommended |
| [TinyLLama-4x1.1B-MoE-Q5_0.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q5_0.gguf) | Q5_0 | 2.337 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TinyLLama-4x1.1B-MoE-Q5_K_S.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q5_K_S.gguf) | Q5_K_S | 2.337 GB | large, low quality loss - recommended |
| [TinyLLama-4x1.1B-MoE-Q5_K_M.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q5_K_M.gguf) | Q5_K_M | 2.399 GB | large, very low quality loss - recommended |
| [TinyLLama-4x1.1B-MoE-Q6_K.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q6_K.gguf) | Q6_K | 2.778 GB | very large, extremely low quality loss |
| [TinyLLama-4x1.1B-MoE-Q8_0.gguf](https://huggingface.co/tensorblock/TinyLLama-4x1.1B-MoE-GGUF/blob/main/TinyLLama-4x1.1B-MoE-Q8_0.gguf) | Q8_0 | 3.597 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TinyLLama-4x1.1B-MoE-GGUF --include "TinyLLama-4x1.1B-MoE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TinyLLama-4x1.1B-MoE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF | tensorblock | 2025-04-21T00:34:27Z | 29 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:luffycodes/vicuna-class-shishya-ac-hal-13b-ep3",
"base_model:quantized:luffycodes/vicuna-class-shishya-ac-hal-13b-ep3",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T20:06:43Z | ---
license: llama2
tags:
- TensorBlock
- GGUF
base_model: luffycodes/vicuna-class-shishya-ac-hal-13b-ep3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## luffycodes/vicuna-class-shishya-ac-hal-13b-ep3 - GGUF
This repo contains GGUF format model files for [luffycodes/vicuna-class-shishya-ac-hal-13b-ep3](https://huggingface.co/luffycodes/vicuna-class-shishya-ac-hal-13b-ep3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q2_K.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q3_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q3_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q3_K_L.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q4_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q4_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q4_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q5_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q5_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q5_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q6_K.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [vicuna-class-shishya-ac-hal-13b-ep3-Q8_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-13b-ep3-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF --include "vicuna-class-shishya-ac-hal-13b-ep3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/vicuna-class-shishya-ac-hal-13b-ep3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
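Once a quant is downloaded, it can also be served over a local HTTP endpoint with llama.cpp's server. This is a minimal sketch, assuming a llama.cpp build (commit b4242 or newer) that includes the `llama-server` binary and using the Q4_K_M file listed in the table above:
```shell
# Minimal sketch: serve the Q4_K_M quant locally (assumes llama-server is built and on PATH)
llama-server \
  -m MY_LOCAL_DIR/vicuna-class-shishya-ac-hal-13b-ep3-Q4_K_M.gguf \
  --host 127.0.0.1 \
  --port 8080 \
  -c 4096
```
The server then accepts completion requests at `http://127.0.0.1:8080`.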
|
tensorblock/nano-phi-115M-v0.1-GGUF | tensorblock | 2025-04-21T00:34:25Z | 38 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:kenhktsui/minipile_quality_score_v1",
"dataset:kenhktsui/simple_wikipedia_LM_quality_score_v1",
"dataset:kenhktsui/refinedweb-3m_quality_score_v1",
"dataset:kenhktsui/TM-DATA_quality_score_v1",
"dataset:kenhktsui/openwebtext_quality_score_v1",
"base_model:kenhktsui/nano-phi-115M-v0.1",
"base_model:quantized:kenhktsui/nano-phi-115M-v0.1",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-21T20:00:03Z | ---
language:
- en
license: mit
library_name: transformers
inference:
parameters:
max_new_tokens: 64
do_sample: true
temperature: 0.1
repetition_penalty: 10
no_repeat_ngram_size: 4
eta_cutoff: 0.0006
renormalize_logits: true
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made Destiny.
The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled
example_title: Harry Potter Series
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:'
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
and another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine'
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
pipeline_tag: text-generation
datasets:
- kenhktsui/minipile_quality_score_v1
- kenhktsui/simple_wikipedia_LM_quality_score_v1
- kenhktsui/refinedweb-3m_quality_score_v1
- kenhktsui/TM-DATA_quality_score_v1
- kenhktsui/openwebtext_quality_score_v1
tags:
- TensorBlock
- GGUF
base_model: kenhktsui/nano-phi-115M-v0.1
model-index:
- name: nano-phi-115M-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 21.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 27.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kenhktsui/nano-phi-115M-v0.1 - GGUF
This repo contains GGUF format model files for [kenhktsui/nano-phi-115M-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [nano-phi-115M-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q2_K.gguf) | Q2_K | 0.061 GB | smallest, significant quality loss - not recommended for most purposes |
| [nano-phi-115M-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q3_K_S.gguf) | Q3_K_S | 0.067 GB | very small, high quality loss |
| [nano-phi-115M-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q3_K_M.gguf) | Q3_K_M | 0.069 GB | very small, high quality loss |
| [nano-phi-115M-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q3_K_L.gguf) | Q3_K_L | 0.072 GB | small, substantial quality loss |
| [nano-phi-115M-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q4_0.gguf) | Q4_0 | 0.077 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nano-phi-115M-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q4_K_S.gguf) | Q4_K_S | 0.077 GB | small, greater quality loss |
| [nano-phi-115M-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q4_K_M.gguf) | Q4_K_M | 0.078 GB | medium, balanced quality - recommended |
| [nano-phi-115M-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q5_0.gguf) | Q5_0 | 0.086 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nano-phi-115M-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q5_K_S.gguf) | Q5_K_S | 0.086 GB | large, low quality loss - recommended |
| [nano-phi-115M-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q5_K_M.gguf) | Q5_K_M | 0.087 GB | large, very low quality loss - recommended |
| [nano-phi-115M-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q6_K.gguf) | Q6_K | 0.096 GB | very large, extremely low quality loss |
| [nano-phi-115M-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/nano-phi-115M-v0.1-GGUF/blob/main/nano-phi-115M-v0.1-Q8_0.gguf) | Q8_0 | 0.124 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/nano-phi-115M-v0.1-GGUF --include "nano-phi-115M-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/nano-phi-115M-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
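Because this model is only ~115M parameters, even the highest-quality Q8_0 quant is about 0.124 GB and runs comfortably on CPU. Below is a minimal sketch, assuming a llama.cpp build (commit b4242 or newer) with `llama-cli` on your PATH and the Q8_0 file from the table above:
```shell
# Minimal sketch: quick CPU sanity check with the Q8_0 quant (assumes llama-cli is on PATH)
llama-cli \
  -m MY_LOCAL_DIR/nano-phi-115M-v0.1-Q8_0.gguf \
  -p "The process of photosynthesis involves the conversion of" \
  -n 64 \
  --temp 0.1
```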
|