| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
tensorblock/DopeorNope_COLA3_13B-GGUF | tensorblock | 2025-06-19T02:03:45Z | 14 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:DopeorNope/COLA3_13B",
"base_model:quantized:DopeorNope/COLA3_13B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T22:29:50Z | ---
base_model: DopeorNope/COLA3_13B
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## DopeorNope/COLA3_13B - GGUF
This repo contains GGUF format model files for [DopeorNope/COLA3_13B](https://huggingface.co/DopeorNope/COLA3_13B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
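For a quick local sanity check, these quantized files can be loaded with any llama.cpp-compatible runtime. The sketch below uses the `llama-cpp-python` bindings and assumes `COLA3_13B-Q4_K_M.gguf` has already been downloaded (see the download instructions below); the file name, context size, and prompt are placeholders, and the correct prompt format should be taken from the original model repository.
```python
# Minimal sketch (assumption: `pip install llama-cpp-python` and the .gguf file is on disk).
from llama_cpp import Llama

llm = Llama(
    model_path="COLA3_13B-Q4_K_M.gguf",  # any quant from the specification table below
    n_ctx=2048,                          # context window; adjust to your memory budget
)

# Plain completion call; replace the prompt with the format used by the original model.
output = llm("Hello, my name is", max_tokens=64, temperature=0.7)
print(output["choices"][0]["text"])
```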
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [COLA3_13B-Q2_K.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [COLA3_13B-Q3_K_S.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [COLA3_13B-Q3_K_M.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [COLA3_13B-Q3_K_L.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [COLA3_13B-Q4_0.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [COLA3_13B-Q4_K_S.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [COLA3_13B-Q4_K_M.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [COLA3_13B-Q5_0.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [COLA3_13B-Q5_K_S.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [COLA3_13B-Q5_K_M.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [COLA3_13B-Q6_K.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [COLA3_13B-Q8_0.gguf](https://huggingface.co/tensorblock/DopeorNope_COLA3_13B-GGUF/blob/main/COLA3_13B-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/DopeorNope_COLA3_13B-GGUF --include "COLA3_13B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/DopeorNope_COLA3_13B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
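If you prefer a programmatic route over the CLI, the same files can be fetched with the `huggingface_hub` Python API installed by the `pip` command above. A minimal sketch, with the target directory as a placeholder:
```python
# Minimal sketch: fetch a single quant file via the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/DopeorNope_COLA3_13B-GGUF",
    filename="COLA3_13B-Q2_K.gguf",  # any filename from the specification table
    local_dir="MY_LOCAL_DIR",        # placeholder, as in the CLI examples above
)
print(local_path)
```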
|
tensorblock/Mathoctopus_Parallel_7B-GGUF | tensorblock | 2025-06-19T02:03:34Z | 28 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"es",
"zh",
"de",
"ru",
"th",
"sw",
"ja",
"fr",
"bn",
"dataset:Mathoctopus/GSM8KInstruct_Parallel",
"base_model:Mathoctopus/Parallel_7B",
"base_model:quantized:Mathoctopus/Parallel_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T15:41:26Z | ---
license: apache-2.0
datasets:
- Mathoctopus/GSM8KInstruct_Parallel
language:
- en
- es
- zh
- de
- ru
- th
- sw
- ja
- fr
- bn
tags:
- TensorBlock
- GGUF
base_model: Mathoctopus/Parallel_7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## Mathoctopus/Parallel_7B - GGUF
This repo contains GGUF format model files for [Mathoctopus/Parallel_7B](https://huggingface.co/Mathoctopus/Parallel_7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Parallel_7B-Q2_K.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Parallel_7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Parallel_7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Parallel_7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Parallel_7B-Q4_0.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Parallel_7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Parallel_7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Parallel_7B-Q5_0.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Parallel_7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Parallel_7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Parallel_7B-Q6_K.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Parallel_7B-Q8_0.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mathoctopus_Parallel_7B-GGUF --include "Parallel_7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mathoctopus_Parallel_7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/FINDA-FIT_llama-p-GGUF | tensorblock | 2025-06-19T02:03:07Z | 15 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:FINDA-FIT/llama-p",
"base_model:quantized:FINDA-FIT/llama-p",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T06:39:37Z | ---
base_model: FINDA-FIT/llama-p
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## FINDA-FIT/llama-p - GGUF
This repo contains GGUF format model files for [FINDA-FIT/llama-p](https://huggingface.co/FINDA-FIT/llama-p).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-p-Q2_K.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q2_K.gguf) | Q2_K | 2.601 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-p-Q3_K_S.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [llama-p-Q3_K_M.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [llama-p-Q3_K_L.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [llama-p-Q4_0.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-p-Q4_K_S.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [llama-p-Q4_K_M.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q4_K_M.gguf) | Q4_K_M | 4.163 GB | medium, balanced quality - recommended |
| [llama-p-Q5_0.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q5_0.gguf) | Q5_0 | 4.741 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-p-Q5_K_S.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q5_K_S.gguf) | Q5_K_S | 4.741 GB | large, low quality loss - recommended |
| [llama-p-Q5_K_M.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [llama-p-Q6_K.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [llama-p-Q8_0.gguf](https://huggingface.co/tensorblock/FINDA-FIT_llama-p-GGUF/blob/main/llama-p-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/FINDA-FIT_llama-p-GGUF --include "llama-p-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/FINDA-FIT_llama-p-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/TinyPixel_elm-test-GGUF | tensorblock | 2025-06-19T02:03:05Z | 13 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:TinyPixel/elm-test",
"base_model:quantized:TinyPixel/elm-test",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T05:57:05Z | ---
base_model: TinyPixel/elm-test
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## TinyPixel/elm-test - GGUF
This repo contains GGUF format model files for [TinyPixel/elm-test](https://huggingface.co/TinyPixel/elm-test).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [elm-test-Q2_K.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [elm-test-Q3_K_S.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [elm-test-Q3_K_M.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [elm-test-Q3_K_L.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [elm-test-Q4_0.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [elm-test-Q4_K_S.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [elm-test-Q4_K_M.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [elm-test-Q5_0.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [elm-test-Q5_K_S.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [elm-test-Q5_K_M.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [elm-test-Q6_K.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [elm-test-Q8_0.gguf](https://huggingface.co/tensorblock/TinyPixel_elm-test-GGUF/blob/main/elm-test-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TinyPixel_elm-test-GGUF --include "elm-test-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TinyPixel_elm-test-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF | tensorblock | 2025-06-19T02:03:03Z | 62 | 0 | null | [
"gguf",
"pretrained",
"conversational",
"TensorBlock",
"GGUF",
"text-generation",
"fr",
"base_model:OpenLLM-France/Claire-Mistral-7B-0.1",
"base_model:quantized:OpenLLM-France/Claire-Mistral-7B-0.1",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-06T05:23:07Z | ---
language:
- fr
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: OpenLLM-France/Claire-Mistral-7B-0.1
tags:
- pretrained
- conversational
- TensorBlock
- GGUF
widget:
- text: '- Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui ?
- Bonjour Camille,'
example_title: Request for a recipe
group: Dash
- text: '[Intervenant 1:] Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui
?
[Intervenant 2:] Bonjour Camille,'
example_title: Request for a recipe
group: Intervenant
- text: '[Camille:] Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui ?
[Dominique:] Bonjour Camille,'
example_title: Request for a recipe
group: FirstName
- text: '[Camille Durand:] Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui
?
[Dominique Petit:] Bonjour Camille,'
example_title: Request for a recipe
group: Named
inference:
parameters:
temperature: 1.0
max_new_tokens: 200
top_k: 10
---
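The front matter above declares default inference parameters (temperature 1.0, top_k 10, max_new_tokens 200) and French dialogue-style widget prompts. The sketch below shows, under the assumption that you load the original full-precision checkpoint rather than these GGUF files, how those settings could map onto a standard Hugging Face Transformers generation call.
```python
# Minimal sketch: apply the card's widget settings with transformers (assumes the
# original OpenLLM-France/Claire-Mistral-7B-0.1 checkpoint, not the GGUF files here).
from transformers import pipeline

generator = pipeline("text-generation", model="OpenLLM-France/Claire-Mistral-7B-0.1")

prompt = (
    "[Camille:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?\n"
    "[Dominique:] Bonjour Camille,"
)
result = generator(
    prompt,
    do_sample=True,
    temperature=1.0,     # values taken from the front matter above
    top_k=10,
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```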
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## OpenLLM-France/Claire-Mistral-7B-0.1 - GGUF
This repo contains GGUF format model files for [OpenLLM-France/Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Claire-Mistral-7B-0.1-Q2_K.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Claire-Mistral-7B-0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Claire-Mistral-7B-0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Claire-Mistral-7B-0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Claire-Mistral-7B-0.1-Q4_0.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Claire-Mistral-7B-0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Claire-Mistral-7B-0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Claire-Mistral-7B-0.1-Q5_0.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Claire-Mistral-7B-0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Claire-Mistral-7B-0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Claire-Mistral-7B-0.1-Q6_K.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Claire-Mistral-7B-0.1-Q8_0.gguf](https://huggingface.co/tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF/blob/main/Claire-Mistral-7B-0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF --include "Claire-Mistral-7B-0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenLLM-France_Claire-Mistral-7B-0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF | tensorblock | 2025-06-19T02:02:55Z | 25 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNCKim/Mistral-7B-OpenHermes",
"base_model:quantized:MNCKim/Mistral-7B-OpenHermes",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-06T04:03:22Z | ---
base_model: MNCKim/Mistral-7B-OpenHermes
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## MNCKim/Mistral-7B-OpenHermes - GGUF
This repo contains GGUF format model files for [MNCKim/Mistral-7B-OpenHermes](https://huggingface.co/MNCKim/Mistral-7B-OpenHermes).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-OpenHermes-Q2_K.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-OpenHermes-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-OpenHermes-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-OpenHermes-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-OpenHermes-Q4_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-OpenHermes-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-OpenHermes-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-OpenHermes-Q5_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-OpenHermes-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-OpenHermes-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-OpenHermes-Q6_K.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-OpenHermes-Q8_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF/blob/main/Mistral-7B-OpenHermes-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF --include "Mistral-7B-OpenHermes-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNCKim_Mistral-7B-OpenHermes-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF | tensorblock | 2025-06-19T02:02:54Z | 45 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:illuin/tiny-random-MistralForCausalLM",
"base_model:quantized:illuin/tiny-random-MistralForCausalLM",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T03:59:16Z | ---
base_model: illuin/tiny-random-MistralForCausalLM
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## illuin/tiny-random-MistralForCausalLM - GGUF
This repo contains GGUF format model files for [illuin/tiny-random-MistralForCausalLM](https://huggingface.co/illuin/tiny-random-MistralForCausalLM).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [tiny-random-MistralForCausalLM-Q2_K.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q2_K.gguf) | Q2_K | 0.002 GB | smallest, significant quality loss - not recommended for most purposes |
| [tiny-random-MistralForCausalLM-Q3_K_S.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q3_K_S.gguf) | Q3_K_S | 0.002 GB | very small, high quality loss |
| [tiny-random-MistralForCausalLM-Q3_K_M.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q3_K_M.gguf) | Q3_K_M | 0.002 GB | very small, high quality loss |
| [tiny-random-MistralForCausalLM-Q3_K_L.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q3_K_L.gguf) | Q3_K_L | 0.002 GB | small, substantial quality loss |
| [tiny-random-MistralForCausalLM-Q4_0.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q4_0.gguf) | Q4_0 | 0.002 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tiny-random-MistralForCausalLM-Q4_K_S.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q4_K_S.gguf) | Q4_K_S | 0.003 GB | small, greater quality loss |
| [tiny-random-MistralForCausalLM-Q4_K_M.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q4_K_M.gguf) | Q4_K_M | 0.003 GB | medium, balanced quality - recommended |
| [tiny-random-MistralForCausalLM-Q5_0.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q5_0.gguf) | Q5_0 | 0.003 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tiny-random-MistralForCausalLM-Q5_K_S.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q5_K_S.gguf) | Q5_K_S | 0.003 GB | large, low quality loss - recommended |
| [tiny-random-MistralForCausalLM-Q5_K_M.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q5_K_M.gguf) | Q5_K_M | 0.003 GB | large, very low quality loss - recommended |
| [tiny-random-MistralForCausalLM-Q6_K.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q6_K.gguf) | Q6_K | 0.003 GB | very large, extremely low quality loss |
| [tiny-random-MistralForCausalLM-Q8_0.gguf](https://huggingface.co/tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF/blob/main/tiny-random-MistralForCausalLM-Q8_0.gguf) | Q8_0 | 0.003 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF --include "tiny-random-MistralForCausalLM-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/illuin_tiny-random-MistralForCausalLM-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF | tensorblock | 2025-06-19T02:02:19Z | 22 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNCJihunKim/Mistral-7B-SlimOrca-orca-platy-out1kover",
"base_model:quantized:MNCJihunKim/Mistral-7B-SlimOrca-orca-platy-out1kover",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T18:44:17Z | ---
base_model: MNCJihunKim/Mistral-7B-SlimOrca-orca-platy-out1kover
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## MNCJihunKim/Mistral-7B-SlimOrca-orca-platy-out1kover - GGUF
This repo contains GGUF format model files for [MNCJihunKim/Mistral-7B-SlimOrca-orca-platy-out1kover](https://huggingface.co/MNCJihunKim/Mistral-7B-SlimOrca-orca-platy-out1kover).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q2_K.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q4_0.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q5_0.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q6_K.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-SlimOrca-orca-platy-out1kover-Q8_0.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF/blob/main/Mistral-7B-SlimOrca-orca-platy-out1kover-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF --include "Mistral-7B-SlimOrca-orca-platy-out1kover-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-orca-platy-out1kover-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF | tensorblock | 2025-06-19T02:02:05Z | 22 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"ko",
"dataset:Open-Orca/OpenOrca",
"dataset:kyujinpy/KOR-OpenOrca-Platypus",
"base_model:Korabbit/llama-2-ko-7b-bilingual",
"base_model:quantized:Korabbit/llama-2-ko-7b-bilingual",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T16:22:00Z | ---
license: llama2
datasets:
- Open-Orca/OpenOrca
- kyujinpy/KOR-OpenOrca-Platypus
language:
- en
- ko
tags:
- TensorBlock
- GGUF
base_model: Korabbit/llama-2-ko-7b-bilingual
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## Korabbit/llama-2-ko-7b-bilingual - GGUF
This repo contains GGUF format model files for [Korabbit/llama-2-ko-7b-bilingual](https://huggingface.co/Korabbit/llama-2-ko-7b-bilingual).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-ko-7b-bilingual-Q2_K.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q2_K.gguf) | Q2_K | 2.601 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-ko-7b-bilingual-Q3_K_S.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [llama-2-ko-7b-bilingual-Q3_K_M.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [llama-2-ko-7b-bilingual-Q3_K_L.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [llama-2-ko-7b-bilingual-Q4_0.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-ko-7b-bilingual-Q4_K_S.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [llama-2-ko-7b-bilingual-Q4_K_M.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q4_K_M.gguf) | Q4_K_M | 4.163 GB | medium, balanced quality - recommended |
| [llama-2-ko-7b-bilingual-Q5_0.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q5_0.gguf) | Q5_0 | 4.741 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-ko-7b-bilingual-Q5_K_S.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q5_K_S.gguf) | Q5_K_S | 4.741 GB | large, low quality loss - recommended |
| [llama-2-ko-7b-bilingual-Q5_K_M.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [llama-2-ko-7b-bilingual-Q6_K.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [llama-2-ko-7b-bilingual-Q8_0.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF --include "llama-2-ko-7b-bilingual-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Voicelab_trurl-2-13b-academic-GGUF | tensorblock | 2025-06-19T02:01:57Z | 64 | 0 | null | [
"gguf",
"voicelab",
"pytorch",
"llama-2",
"trurl",
"trurl-2",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"pl",
"base_model:Voicelab/trurl-2-13b-academic",
"base_model:quantized:Voicelab/trurl-2-13b-academic",
"region:us"
] | text-generation | 2025-05-05T14:02:57Z | ---
language:
- en
- pl
pipeline_tag: text-generation
inference: false
tags:
- voicelab
- pytorch
- llama-2
- trurl
- trurl-2
- TensorBlock
- GGUF
base_model: Voicelab/trurl-2-13b-academic
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## Voicelab/trurl-2-13b-academic - GGUF
This repo contains GGUF format model files for [Voicelab/trurl-2-13b-academic](https://huggingface.co/Voicelab/trurl-2-13b-academic).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [trurl-2-13b-academic-Q2_K.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [trurl-2-13b-academic-Q3_K_S.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [trurl-2-13b-academic-Q3_K_M.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [trurl-2-13b-academic-Q3_K_L.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [trurl-2-13b-academic-Q4_0.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [trurl-2-13b-academic-Q4_K_S.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [trurl-2-13b-academic-Q4_K_M.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [trurl-2-13b-academic-Q5_0.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [trurl-2-13b-academic-Q5_K_S.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [trurl-2-13b-academic-Q5_K_M.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [trurl-2-13b-academic-Q6_K.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [trurl-2-13b-academic-Q8_0.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Voicelab_trurl-2-13b-academic-GGUF --include "trurl-2-13b-academic-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Voicelab_trurl-2-13b-academic-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
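Once a file has been downloaded, it can be loaded by any llama.cpp-compatible runtime. A minimal sketch, assuming a local llama.cpp build with the `llama-cli` binary on your `PATH` and that the Q4_K_M file listed above was saved to `MY_LOCAL_DIR` (the prompt and context size are illustrative assumptions):
```shell
# Quick smoke test of the downloaded quant with llama.cpp.
# Model path, context size (-c) and prompt are illustrative assumptions.
./llama-cli \
  -m MY_LOCAL_DIR/trurl-2-13b-academic-Q4_K_M.gguf \
  -c 4096 \
  -p "Briefly explain what GGUF quantization trades off."
```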
|
tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF | tensorblock | 2025-06-19T02:01:08Z | 19 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran4k",
"base_model:quantized:MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran4k",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T00:57:42Z | ---
base_model: MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran4k
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran4k - GGUF
This repo contains GGUF format model files for [MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran4k](https://huggingface.co/MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran4k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q2_K.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q4_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q5_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q6_K.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran4k-Q8_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran4k-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF --include "Mistral-7B-SlimOrca-OP-U2048-ran4k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran4k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/seeklhy_codes-1b-spider-GGUF | tensorblock | 2025-06-19T02:01:01Z | 23 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:seeklhy/codes-1b-spider",
"base_model:quantized:seeklhy/codes-1b-spider",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T22:54:30Z | ---
base_model: seeklhy/codes-1b-spider
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## seeklhy/codes-1b-spider - GGUF
This repo contains GGUF format model files for [seeklhy/codes-1b-spider](https://huggingface.co/seeklhy/codes-1b-spider).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [codes-1b-spider-Q2_K.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q2_K.gguf) | Q2_K | 0.572 GB | smallest, significant quality loss - not recommended for most purposes |
| [codes-1b-spider-Q3_K_S.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q3_K_S.gguf) | Q3_K_S | 0.635 GB | very small, high quality loss |
| [codes-1b-spider-Q3_K_M.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q3_K_M.gguf) | Q3_K_M | 0.719 GB | very small, high quality loss |
| [codes-1b-spider-Q3_K_L.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q3_K_L.gguf) | Q3_K_L | 0.780 GB | small, substantial quality loss |
| [codes-1b-spider-Q4_0.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q4_0.gguf) | Q4_0 | 0.784 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [codes-1b-spider-Q4_K_S.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q4_K_S.gguf) | Q4_K_S | 0.790 GB | small, greater quality loss |
| [codes-1b-spider-Q4_K_M.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q4_K_M.gguf) | Q4_K_M | 0.850 GB | medium, balanced quality - recommended |
| [codes-1b-spider-Q5_0.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q5_0.gguf) | Q5_0 | 0.924 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [codes-1b-spider-Q5_K_S.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q5_K_S.gguf) | Q5_K_S | 0.924 GB | large, low quality loss - recommended |
| [codes-1b-spider-Q5_K_M.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q5_K_M.gguf) | Q5_K_M | 0.965 GB | large, very low quality loss - recommended |
| [codes-1b-spider-Q6_K.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q6_K.gguf) | Q6_K | 1.072 GB | very large, extremely low quality loss |
| [codes-1b-spider-Q8_0.gguf](https://huggingface.co/tensorblock/seeklhy_codes-1b-spider-GGUF/blob/main/codes-1b-spider-Q8_0.gguf) | Q8_0 | 1.368 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/seeklhy_codes-1b-spider-GGUF --include "codes-1b-spider-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/seeklhy_codes-1b-spider-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF | tensorblock | 2025-06-19T02:00:48Z | 71 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:VMware/open-instruct",
"base_model:VMware/open-llama-7b-v2-open-instruct",
"base_model:quantized:VMware/open-llama-7b-v2-open-instruct",
"license:cc-by-sa-3.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T18:54:20Z | ---
license: cc-by-sa-3.0
datasets:
- VMware/open-instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: VMware/open-llama-7b-v2-open-instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## VMware/open-llama-7b-v2-open-instruct - GGUF
This repo contains GGUF format model files for [VMware/open-llama-7b-v2-open-instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [open-llama-7b-v2-open-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [open-llama-7b-v2-open-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [open-llama-7b-v2-open-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [open-llama-7b-v2-open-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [open-llama-7b-v2-open-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [open-llama-7b-v2-open-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [open-llama-7b-v2-open-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [open-llama-7b-v2-open-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [open-llama-7b-v2-open-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [open-llama-7b-v2-open-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [open-llama-7b-v2-open-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [open-llama-7b-v2-open-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF/blob/main/open-llama-7b-v2-open-instruct-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF --include "open-llama-7b-v2-open-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/VMware_open-llama-7b-v2-open-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/heegyu_llama-2-ko-7b-chat-GGUF | tensorblock | 2025-06-19T02:00:28Z | 61 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"base_model:heegyu/llama-2-ko-7b-chat",
"base_model:quantized:heegyu/llama-2-ko-7b-chat",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T10:33:27Z | ---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
tags:
- TensorBlock
- GGUF
base_model: heegyu/llama-2-ko-7b-chat
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## heegyu/llama-2-ko-7b-chat - GGUF
This repo contains GGUF format model files for [heegyu/llama-2-ko-7b-chat](https://huggingface.co/heegyu/llama-2-ko-7b-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-ko-7b-chat-Q2_K.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q2_K.gguf) | Q2_K | 2.601 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-ko-7b-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [llama-2-ko-7b-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [llama-2-ko-7b-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [llama-2-ko-7b-chat-Q4_0.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-ko-7b-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [llama-2-ko-7b-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q4_K_M.gguf) | Q4_K_M | 4.163 GB | medium, balanced quality - recommended |
| [llama-2-ko-7b-chat-Q5_0.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q5_0.gguf) | Q5_0 | 4.741 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-ko-7b-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q5_K_S.gguf) | Q5_K_S | 4.741 GB | large, low quality loss - recommended |
| [llama-2-ko-7b-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [llama-2-ko-7b-chat-Q6_K.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [llama-2-ko-7b-chat-Q8_0.gguf](https://huggingface.co/tensorblock/heegyu_llama-2-ko-7b-chat-GGUF/blob/main/llama-2-ko-7b-chat-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/heegyu_llama-2-ko-7b-chat-GGUF --include "llama-2-ko-7b-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/heegyu_llama-2-ko-7b-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
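A downloaded quant can also be served over HTTP rather than run interactively. A minimal sketch, assuming a local llama.cpp build that provides the `llama-server` binary and that the Q4_K_M file listed above sits in `MY_LOCAL_DIR` (port and context size are illustrative assumptions):
```shell
# Serve the quantized model via llama.cpp's OpenAI-compatible HTTP server.
# File name, port and context size are illustrative assumptions.
./llama-server \
  -m MY_LOCAL_DIR/llama-2-ko-7b-chat-Q4_K_M.gguf \
  -c 4096 \
  --port 8080
```
Requests can then be sent to `http://localhost:8080/v1/chat/completions` with any OpenAI-compatible client.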
|
tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF | tensorblock | 2025-06-19T02:00:05Z | 142 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:Norquinal/claude_multiround_chat_1k",
"base_model:Norquinal/Mistral-7B-claude-chat",
"base_model:quantized:Norquinal/Mistral-7B-claude-chat",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T17:07:17Z | ---
datasets:
- Norquinal/claude_multiround_chat_1k
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: Norquinal/Mistral-7B-claude-chat
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Norquinal/Mistral-7B-claude-chat - GGUF
This repo contains GGUF format model files for [Norquinal/Mistral-7B-claude-chat](https://huggingface.co/Norquinal/Mistral-7B-claude-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-claude-chat-Q2_K.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-claude-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-claude-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-claude-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-claude-chat-Q4_0.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-claude-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-claude-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-claude-chat-Q5_0.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-claude-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-claude-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-claude-chat-Q6_K.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-claude-chat-Q8_0.gguf](https://huggingface.co/tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF/blob/main/Mistral-7B-claude-chat-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF --include "Mistral-7B-claude-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Norquinal_Mistral-7B-claude-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF | tensorblock | 2025-06-19T01:59:00Z | 20 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNCLLM/Mistral-7B-OP-over1k-grad1.0",
"base_model:quantized:MNCLLM/Mistral-7B-OP-over1k-grad1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T04:14:31Z | ---
base_model: MNCLLM/Mistral-7B-OP-over1k-grad1.0
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MNCLLM/Mistral-7B-OP-over1k-grad1.0 - GGUF
This repo contains GGUF format model files for [MNCLLM/Mistral-7B-OP-over1k-grad1.0](https://huggingface.co/MNCLLM/Mistral-7B-OP-over1k-grad1.0).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-OP-over1k-grad1.0-Q2_K.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-OP-over1k-grad1.0-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-OP-over1k-grad1.0-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-OP-over1k-grad1.0-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-OP-over1k-grad1.0-Q4_0.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-OP-over1k-grad1.0-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-OP-over1k-grad1.0-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-OP-over1k-grad1.0-Q5_0.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-OP-over1k-grad1.0-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-OP-over1k-grad1.0-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-OP-over1k-grad1.0-Q6_K.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-OP-over1k-grad1.0-Q8_0.gguf](https://huggingface.co/tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF/blob/main/Mistral-7B-OP-over1k-grad1.0-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF --include "Mistral-7B-OP-over1k-grad1.0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNCLLM_Mistral-7B-OP-over1k-grad1.0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF | tensorblock | 2025-06-19T01:58:40Z | 37 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Abe13/Full-juni-dolphin-2.1-mistral-7b",
"base_model:quantized:Abe13/Full-juni-dolphin-2.1-mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T01:13:55Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: Abe13/Full-juni-dolphin-2.1-mistral-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Abe13/Full-juni-dolphin-2.1-mistral-7b - GGUF
This repo contains GGUF format model files for [Abe13/Full-juni-dolphin-2.1-mistral-7b](https://huggingface.co/Abe13/Full-juni-dolphin-2.1-mistral-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
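The block above is the ChatML format this model expects; the placeholders are replaced with your own text before inference. A minimal sketch, assuming a local llama.cpp build with `llama-cli`, a downloaded Q4_K_M file in `MY_LOCAL_DIR`, and hypothetical system/user messages:
```shell
# Fill the ChatML template shown above and pass it verbatim to llama.cpp.
# The messages and model path below are illustrative assumptions.
PROMPT='<|im_start|>system
You are a concise assistant.<|im_end|>
<|im_start|>user
Summarize what GGUF quantization trades off.<|im_end|>
<|im_start|>assistant
'
./llama-cli -m MY_LOCAL_DIR/Full-juni-dolphin-2.1-mistral-7b-Q4_K_M.gguf -p "$PROMPT"
```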
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Full-juni-dolphin-2.1-mistral-7b-Q2_K.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Full-juni-dolphin-2.1-mistral-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Full-juni-dolphin-2.1-mistral-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Full-juni-dolphin-2.1-mistral-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Full-juni-dolphin-2.1-mistral-7b-Q4_0.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Full-juni-dolphin-2.1-mistral-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Full-juni-dolphin-2.1-mistral-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Full-juni-dolphin-2.1-mistral-7b-Q5_0.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Full-juni-dolphin-2.1-mistral-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Full-juni-dolphin-2.1-mistral-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Full-juni-dolphin-2.1-mistral-7b-Q6_K.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Full-juni-dolphin-2.1-mistral-7b-Q8_0.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF/blob/main/Full-juni-dolphin-2.1-mistral-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF --include "Full-juni-dolphin-2.1-mistral-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Abe13_Full-juni-dolphin-2.1-mistral-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF | tensorblock | 2025-06-19T01:58:30Z | 21 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:microsoft/Llama2-7b-WhoIsHarryPotter",
"base_model:quantized:microsoft/Llama2-7b-WhoIsHarryPotter",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T23:17:04Z | ---
license: other
license_name: microsoft-research-license-agreement
license_link: LICENSE
tags:
- TensorBlock
- GGUF
base_model: microsoft/Llama2-7b-WhoIsHarryPotter
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## microsoft/Llama2-7b-WhoIsHarryPotter - GGUF
This repo contains GGUF format model files for [microsoft/Llama2-7b-WhoIsHarryPotter](https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama2-7b-WhoIsHarryPotter-Q2_K.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama2-7b-WhoIsHarryPotter-Q3_K_S.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Llama2-7b-WhoIsHarryPotter-Q3_K_M.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Llama2-7b-WhoIsHarryPotter-Q3_K_L.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Llama2-7b-WhoIsHarryPotter-Q4_0.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama2-7b-WhoIsHarryPotter-Q4_K_S.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Llama2-7b-WhoIsHarryPotter-Q4_K_M.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Llama2-7b-WhoIsHarryPotter-Q5_0.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama2-7b-WhoIsHarryPotter-Q5_K_S.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Llama2-7b-WhoIsHarryPotter-Q5_K_M.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Llama2-7b-WhoIsHarryPotter-Q6_K.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Llama2-7b-WhoIsHarryPotter-Q8_0.gguf](https://huggingface.co/tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF/blob/main/Llama2-7b-WhoIsHarryPotter-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF --include "Llama2-7b-WhoIsHarryPotter-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/microsoft_Llama2-7b-WhoIsHarryPotter-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF | tensorblock | 2025-06-19T01:58:13Z | 36 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNCJ1hun/Dolphin-Mistral-7B-OP-u1k-ver0.1",
"base_model:quantized:MNCJ1hun/Dolphin-Mistral-7B-OP-u1k-ver0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:41:47Z | ---
base_model: MNCJ1hun/Dolphin-Mistral-7B-OP-u1k-ver0.1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MNCJ1hun/Dolphin-Mistral-7B-OP-u1k-ver0.1 - GGUF
This repo contains GGUF format model files for [MNCJ1hun/Dolphin-Mistral-7B-OP-u1k-ver0.1](https://huggingface.co/MNCJ1hun/Dolphin-Mistral-7B-OP-u1k-ver0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q2_K.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q4_0.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q5_0.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q6_K.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Dolphin-Mistral-7B-OP-u1k-ver0.1-Q8_0.gguf](https://huggingface.co/tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF/blob/main/Dolphin-Mistral-7B-OP-u1k-ver0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF --include "Dolphin-Mistral-7B-OP-u1k-ver0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNCJ1hun_Dolphin-Mistral-7B-OP-u1k-ver0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF | tensorblock | 2025-06-19T01:58:10Z | 63 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:nakhyeonn/llama-2-ko-qlora-prompt",
"base_model:quantized:nakhyeonn/llama-2-ko-qlora-prompt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T17:32:32Z | ---
base_model: nakhyeonn/llama-2-ko-qlora-prompt
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## nakhyeonn/llama-2-ko-qlora-prompt - GGUF
This repo contains GGUF format model files for [nakhyeonn/llama-2-ko-qlora-prompt](https://huggingface.co/nakhyeonn/llama-2-ko-qlora-prompt).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-ko-qlora-prompt-Q2_K.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q2_K.gguf) | Q2_K | 0.001 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-ko-qlora-prompt-Q3_K_S.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q3_K_S.gguf) | Q3_K_S | 0.001 GB | very small, high quality loss |
| [llama-2-ko-qlora-prompt-Q3_K_M.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q3_K_M.gguf) | Q3_K_M | 0.001 GB | very small, high quality loss |
| [llama-2-ko-qlora-prompt-Q3_K_L.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q3_K_L.gguf) | Q3_K_L | 0.001 GB | small, substantial quality loss |
| [llama-2-ko-qlora-prompt-Q4_0.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q4_0.gguf) | Q4_0 | 0.001 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-ko-qlora-prompt-Q4_K_S.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q4_K_S.gguf) | Q4_K_S | 0.001 GB | small, greater quality loss |
| [llama-2-ko-qlora-prompt-Q4_K_M.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q4_K_M.gguf) | Q4_K_M | 0.001 GB | medium, balanced quality - recommended |
| [llama-2-ko-qlora-prompt-Q5_0.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q5_0.gguf) | Q5_0 | 0.001 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-ko-qlora-prompt-Q5_K_S.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q5_K_S.gguf) | Q5_K_S | 0.001 GB | large, low quality loss - recommended |
| [llama-2-ko-qlora-prompt-Q5_K_M.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q5_K_M.gguf) | Q5_K_M | 0.001 GB | large, very low quality loss - recommended |
| [llama-2-ko-qlora-prompt-Q6_K.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q6_K.gguf) | Q6_K | 0.001 GB | very large, extremely low quality loss |
| [llama-2-ko-qlora-prompt-Q8_0.gguf](https://huggingface.co/tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF/blob/main/llama-2-ko-qlora-prompt-Q8_0.gguf) | Q8_0 | 0.001 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF --include "llama-2-ko-qlora-prompt-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/nakhyeonn_llama-2-ko-qlora-prompt-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF | tensorblock | 2025-06-19T01:58:08Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"base_model:Open-Orca/Mistral-7B-SlimOrca",
"base_model:quantized:Open-Orca/Mistral-7B-SlimOrca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:03:35Z | ---
datasets:
- Open-Orca/SlimOrca
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: Open-Orca/Mistral-7B-SlimOrca
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Open-Orca/Mistral-7B-SlimOrca - GGUF
This repo contains GGUF format model files for [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-SlimOrca-Q2_K.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-SlimOrca-Q3_K_S.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-Q3_K_M.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-Q3_K_L.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-SlimOrca-Q4_0.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-SlimOrca-Q4_K_S.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-SlimOrca-Q4_K_M.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-SlimOrca-Q5_0.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-SlimOrca-Q5_K_S.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-SlimOrca-Q5_K_M.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-SlimOrca-Q6_K.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-SlimOrca-Q8_0.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF/blob/main/Mistral-7B-SlimOrca-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF --include "Mistral-7B-SlimOrca-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Open-Orca_Mistral-7B-SlimOrca-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF | tensorblock | 2025-06-19T01:58:03Z | 30 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down",
"base_model:quantized:CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:50:51Z | ---
base_model: CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down - GGUF
This repo contains GGUF format model files for [CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q2_K.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q3_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q3_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q3_K_L.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q4_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q4_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q4_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q5_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q5_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q5_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q6_K.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q8_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF --include "llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF | tensorblock | 2025-06-19T01:57:45Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"base_model:jin05102518/Astral-7B-Instruct-v0.01",
"base_model:quantized:jin05102518/Astral-7B-Instruct-v0.01",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-02T09:16:29Z | ---
language:
- ko
datasets:
- beomi/KoAlpaca-v1.1a
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: jin05102518/Astral-7B-Instruct-v0.01
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## jin05102518/Astral-7B-Instruct-v0.01 - GGUF
This repo contains GGUF format model files for [jin05102518/Astral-7B-Instruct-v0.01](https://huggingface.co/jin05102518/Astral-7B-Instruct-v0.01).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
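As a rough illustration of how this template is filled in before the text is passed to the model, the snippet below substitutes the two placeholders; the system and user strings are made-up examples, not values from the original repository:
```python
# Build a prompt string following the template above.
# The system_prompt and prompt values are illustrative placeholders.
template = (
    "<|system|>\n{system_prompt}</s>\n"
    "<|user|>\n{prompt}</s>\n"
    "<|assistant|>\n"
)

full_prompt = template.format(
    system_prompt="You are a helpful assistant.",
    prompt="Explain in one sentence what GGUF quantization is.",
)
print(full_prompt)
```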
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Astral-7B-Instruct-v0.01-Q2_K.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Astral-7B-Instruct-v0.01-Q3_K_S.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Astral-7B-Instruct-v0.01-Q3_K_M.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Astral-7B-Instruct-v0.01-Q3_K_L.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Astral-7B-Instruct-v0.01-Q4_0.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Astral-7B-Instruct-v0.01-Q4_K_S.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Astral-7B-Instruct-v0.01-Q4_K_M.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Astral-7B-Instruct-v0.01-Q5_0.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Astral-7B-Instruct-v0.01-Q5_K_S.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Astral-7B-Instruct-v0.01-Q5_K_M.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Astral-7B-Instruct-v0.01-Q6_K.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Astral-7B-Instruct-v0.01-Q8_0.gguf](https://huggingface.co/tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF/blob/main/Astral-7B-Instruct-v0.01-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF --include "Astral-7B-Instruct-v0.01-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/jin05102518_Astral-7B-Instruct-v0.01-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/klyang_MentaLLaMA-chat-13B-GGUF | tensorblock | 2025-06-19T01:57:27Z | 37 | 0 | null | [
"gguf",
"medical",
"TensorBlock",
"GGUF",
"en",
"base_model:klyang/MentaLLaMA-chat-13B",
"base_model:quantized:klyang/MentaLLaMA-chat-13B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:25:34Z | ---
license: mit
language:
- en
metrics:
- f1
tags:
- medical
- TensorBlock
- GGUF
base_model: klyang/MentaLLaMA-chat-13B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## klyang/MentaLLaMA-chat-13B - GGUF
This repo contains GGUF format model files for [klyang/MentaLLaMA-chat-13B](https://huggingface.co/klyang/MentaLLaMA-chat-13B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MentaLLaMA-chat-13B-Q2_K.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [MentaLLaMA-chat-13B-Q3_K_S.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [MentaLLaMA-chat-13B-Q3_K_M.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [MentaLLaMA-chat-13B-Q3_K_L.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [MentaLLaMA-chat-13B-Q4_0.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MentaLLaMA-chat-13B-Q4_K_S.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [MentaLLaMA-chat-13B-Q4_K_M.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [MentaLLaMA-chat-13B-Q5_0.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MentaLLaMA-chat-13B-Q5_K_S.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [MentaLLaMA-chat-13B-Q5_K_M.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [MentaLLaMA-chat-13B-Q6_K.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [MentaLLaMA-chat-13B-Q8_0.gguf](https://huggingface.co/tensorblock/klyang_MentaLLaMA-chat-13B-GGUF/blob/main/MentaLLaMA-chat-13B-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/klyang_MentaLLaMA-chat-13B-GGUF --include "MentaLLaMA-chat-13B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/klyang_MentaLLaMA-chat-13B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF | tensorblock | 2025-06-19T01:57:23Z | 21 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:LTC-AI-Labs/L2-7b-Hermes-WVG-Test",
"base_model:quantized:LTC-AI-Labs/L2-7b-Hermes-WVG-Test",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T03:02:17Z | ---
base_model: LTC-AI-Labs/L2-7b-Hermes-WVG-Test
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## LTC-AI-Labs/L2-7b-Hermes-WVG-Test - GGUF
This repo contains GGUF format model files for [LTC-AI-Labs/L2-7b-Hermes-WVG-Test](https://huggingface.co/LTC-AI-Labs/L2-7b-Hermes-WVG-Test).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L2-7b-Hermes-WVG-Test-Q2_K.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [L2-7b-Hermes-WVG-Test-Q3_K_S.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [L2-7b-Hermes-WVG-Test-Q3_K_M.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [L2-7b-Hermes-WVG-Test-Q3_K_L.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [L2-7b-Hermes-WVG-Test-Q4_0.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [L2-7b-Hermes-WVG-Test-Q4_K_S.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [L2-7b-Hermes-WVG-Test-Q4_K_M.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [L2-7b-Hermes-WVG-Test-Q5_0.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [L2-7b-Hermes-WVG-Test-Q5_K_S.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [L2-7b-Hermes-WVG-Test-Q5_K_M.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [L2-7b-Hermes-WVG-Test-Q6_K.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [L2-7b-Hermes-WVG-Test-Q8_0.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF --include "L2-7b-Hermes-WVG-Test-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
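The pattern-based download can also be done from Python with `huggingface_hub.snapshot_download`; a small sketch, where the directory name is a placeholder and the pattern mirrors the CLI example above:
```python
from huggingface_hub import snapshot_download

# Fetch only the files whose names match the given pattern,
# mirroring the --include filter used in the CLI command above.
snapshot_download(
    repo_id="tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",  # placeholder
)
```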
|
tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF | tensorblock | 2025-06-19T01:57:06Z | 19 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:YeungNLP/LongQLoRA-Llama2-7b-8k",
"base_model:quantized:YeungNLP/LongQLoRA-Llama2-7b-8k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T20:59:48Z | ---
license: apache-2.0
language:
- en
tags:
- TensorBlock
- GGUF
base_model: YeungNLP/LongQLoRA-Llama2-7b-8k
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## YeungNLP/LongQLoRA-Llama2-7b-8k - GGUF
This repo contains GGUF format model files for [YeungNLP/LongQLoRA-Llama2-7b-8k](https://huggingface.co/YeungNLP/LongQLoRA-Llama2-7b-8k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [LongQLoRA-Llama2-7b-8k-Q2_K.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [LongQLoRA-Llama2-7b-8k-Q3_K_S.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [LongQLoRA-Llama2-7b-8k-Q3_K_M.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [LongQLoRA-Llama2-7b-8k-Q3_K_L.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [LongQLoRA-Llama2-7b-8k-Q4_0.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [LongQLoRA-Llama2-7b-8k-Q4_K_S.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [LongQLoRA-Llama2-7b-8k-Q4_K_M.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [LongQLoRA-Llama2-7b-8k-Q5_0.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [LongQLoRA-Llama2-7b-8k-Q5_K_S.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [LongQLoRA-Llama2-7b-8k-Q5_K_M.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [LongQLoRA-Llama2-7b-8k-Q6_K.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [LongQLoRA-Llama2-7b-8k-Q8_0.gguf](https://huggingface.co/tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF/blob/main/LongQLoRA-Llama2-7b-8k-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF --include "LongQLoRA-Llama2-7b-8k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/YeungNLP_LongQLoRA-Llama2-7b-8k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ksmcg_Mistral-tiny-GGUF | tensorblock | 2025-06-19T01:57:01Z | 44 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:ksmcg/Mistral-tiny",
"base_model:quantized:ksmcg/Mistral-tiny",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T19:43:01Z | ---
base_model: ksmcg/Mistral-tiny
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ksmcg/Mistral-tiny - GGUF
This repo contains GGUF format model files for [ksmcg/Mistral-tiny](https://huggingface.co/ksmcg/Mistral-tiny).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-tiny-Q2_K.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q2_K.gguf) | Q2_K | 0.001 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-tiny-Q3_K_S.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q3_K_S.gguf) | Q3_K_S | 0.001 GB | very small, high quality loss |
| [Mistral-tiny-Q3_K_M.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q3_K_M.gguf) | Q3_K_M | 0.001 GB | very small, high quality loss |
| [Mistral-tiny-Q3_K_L.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q3_K_L.gguf) | Q3_K_L | 0.001 GB | small, substantial quality loss |
| [Mistral-tiny-Q4_0.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q4_0.gguf) | Q4_0 | 0.001 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-tiny-Q4_K_S.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q4_K_S.gguf) | Q4_K_S | 0.001 GB | small, greater quality loss |
| [Mistral-tiny-Q4_K_M.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q4_K_M.gguf) | Q4_K_M | 0.001 GB | medium, balanced quality - recommended |
| [Mistral-tiny-Q5_0.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q5_0.gguf) | Q5_0 | 0.001 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-tiny-Q5_K_S.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q5_K_S.gguf) | Q5_K_S | 0.001 GB | large, low quality loss - recommended |
| [Mistral-tiny-Q5_K_M.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q5_K_M.gguf) | Q5_K_M | 0.001 GB | large, very low quality loss - recommended |
| [Mistral-tiny-Q6_K.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q6_K.gguf) | Q6_K | 0.001 GB | very large, extremely low quality loss |
| [Mistral-tiny-Q8_0.gguf](https://huggingface.co/tensorblock/ksmcg_Mistral-tiny-GGUF/blob/main/Mistral-tiny-Q8_0.gguf) | Q8_0 | 0.001 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ksmcg_Mistral-tiny-GGUF --include "Mistral-tiny-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ksmcg_Mistral-tiny-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF | tensorblock | 2025-06-19T01:56:50Z | 42 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"base_model:kyujinpy/Kosy-platypus2-13B-v4",
"base_model:quantized:kyujinpy/Kosy-platypus2-13B-v4",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T17:30:51Z | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- TensorBlock
- GGUF
base_model: kyujinpy/Kosy-platypus2-13B-v4
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## kyujinpy/Kosy-platypus2-13B-v4 - GGUF
This repo contains GGUF format model files for [kyujinpy/Kosy-platypus2-13B-v4](https://huggingface.co/kyujinpy/Kosy-platypus2-13B-v4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Kosy-platypus2-13B-v4-Q2_K.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [Kosy-platypus2-13B-v4-Q3_K_S.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [Kosy-platypus2-13B-v4-Q3_K_M.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [Kosy-platypus2-13B-v4-Q3_K_L.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [Kosy-platypus2-13B-v4-Q4_0.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Kosy-platypus2-13B-v4-Q4_K_S.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [Kosy-platypus2-13B-v4-Q4_K_M.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [Kosy-platypus2-13B-v4-Q5_0.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Kosy-platypus2-13B-v4-Q5_K_S.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [Kosy-platypus2-13B-v4-Q5_K_M.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [Kosy-platypus2-13B-v4-Q6_K.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [Kosy-platypus2-13B-v4-Q8_0.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF --include "Kosy-platypus2-13B-v4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF | tensorblock | 2025-06-19T01:56:35Z | 70 | 0 | null | [
"gguf",
"pretrained",
"flashback",
"web",
"conversational",
"TensorBlock",
"GGUF",
"text-generation",
"sv",
"en",
"no",
"da",
"base_model:timpal0l/Mistral-7B-v0.1-flashback-v2",
"base_model:quantized:timpal0l/Mistral-7B-v0.1-flashback-v2",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T14:13:28Z | ---
language:
- sv
- en
- 'no'
- da
license: mit
tags:
- pretrained
- flashback
- web
- conversational
- TensorBlock
- GGUF
models:
- timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
pipeline_tag: text-generation
widget:
- text: Jag tycker att det Γ€r roligt med
base_model: timpal0l/Mistral-7B-v0.1-flashback-v2
model-index:
- name: Mistral-7B-v0.1-flashback-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.66
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## timpal0l/Mistral-7B-v0.1-flashback-v2 - GGUF
This repo contains GGUF format model files for [timpal0l/Mistral-7B-v0.1-flashback-v2](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
```
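To use the template, `{system_prompt}` and `{prompt}` are replaced with concrete text before the string is sent to the model. A hedged example follows, assuming a llama.cpp build with a `llama-cli` binary; the quant choice and prompt wording are invented for illustration:
```shell
# The special tokens come from the template above; the actual wording is a placeholder
./llama-cli -m MY_LOCAL_DIR/Mistral-7B-v0.1-flashback-v2-Q4_K_M.gguf -n 64 \
  -p "<s>[INST] <<SYS>>
You are a helpful assistant.
<</SYS>>
Write one sentence about Stockholm. [/INST]"
```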
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-v0.1-flashback-v2-Q2_K.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-v0.1-flashback-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-v0.1-flashback-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-v0.1-flashback-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-v0.1-flashback-v2-Q4_0.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-v0.1-flashback-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-v0.1-flashback-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-v0.1-flashback-v2-Q5_0.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-v0.1-flashback-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-v0.1-flashback-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-v0.1-flashback-v2-Q6_K.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-v0.1-flashback-v2-Q8_0.gguf](https://huggingface.co/tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF/blob/main/Mistral-7B-v0.1-flashback-v2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF --include "Mistral-7B-v0.1-flashback-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/timpal0l_Mistral-7B-v0.1-flashback-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF | tensorblock | 2025-06-19T01:56:15Z | 33 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:SicariusSicariiStuff/Tenebra_30B_Alpha01",
"base_model:quantized:SicariusSicariiStuff/Tenebra_30B_Alpha01",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T09:25:06Z | ---
language:
- en
tags:
- TensorBlock
- GGUF
base_model: SicariusSicariiStuff/Tenebra_30B_Alpha01
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## SicariusSicariiStuff/Tenebra_30B_Alpha01 - GGUF
This repo contains GGUF format model files for [SicariusSicariiStuff/Tenebra_30B_Alpha01](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Tenebra_30B_Alpha01-Q2_K.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q2_K.gguf) | Q2_K | 12.049 GB | smallest, significant quality loss - not recommended for most purposes |
| [Tenebra_30B_Alpha01-Q3_K_S.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q3_K_S.gguf) | Q3_K_S | 14.064 GB | very small, high quality loss |
| [Tenebra_30B_Alpha01-Q3_K_M.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q3_K_M.gguf) | Q3_K_M | 15.776 GB | very small, high quality loss |
| [Tenebra_30B_Alpha01-Q3_K_L.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q3_K_L.gguf) | Q3_K_L | 17.280 GB | small, substantial quality loss |
| [Tenebra_30B_Alpha01-Q4_0.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q4_0.gguf) | Q4_0 | 18.356 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Tenebra_30B_Alpha01-Q4_K_S.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q4_K_S.gguf) | Q4_K_S | 18.482 GB | small, greater quality loss |
| [Tenebra_30B_Alpha01-Q4_K_M.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q4_K_M.gguf) | Q4_K_M | 19.621 GB | medium, balanced quality - recommended |
| [Tenebra_30B_Alpha01-Q5_0.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q5_0.gguf) | Q5_0 | 22.395 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Tenebra_30B_Alpha01-Q5_K_S.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q5_K_S.gguf) | Q5_K_S | 22.395 GB | large, low quality loss - recommended |
| [Tenebra_30B_Alpha01-Q5_K_M.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q5_K_M.gguf) | Q5_K_M | 23.047 GB | large, very low quality loss - recommended |
| [Tenebra_30B_Alpha01-Q6_K.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q6_K.gguf) | Q6_K | 26.687 GB | very large, extremely low quality loss |
| [Tenebra_30B_Alpha01-Q8_0.gguf](https://huggingface.co/tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF/blob/main/Tenebra_30B_Alpha01-Q8_0.gguf) | Q8_0 | 34.565 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF --include "Tenebra_30B_Alpha01-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SicariusSicariiStuff_Tenebra_30B_Alpha01-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
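At 30B parameters even the smaller quants in the table above are sizeable, so partial GPU offload is often used. The following is a rough sketch under stated assumptions: a GPU-enabled llama.cpp build where `-ngl` sets the number of layers to offload; the layer count and file choice are arbitrary:
```shell
# Offload roughly the first 40 transformer layers to the GPU (values are illustrative)
./llama-cli \
  -m MY_LOCAL_DIR/Tenebra_30B_Alpha01-Q4_K_M.gguf \
  -ngl 40 \
  -p "Once upon a time" \
  -n 128
```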
|
tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF | tensorblock | 2025-06-19T01:56:07Z | 79 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/samantha-data",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"base_model:cognitivecomputations/dolphin-2.2-yi-34b-200k",
"base_model:quantized:cognitivecomputations/dolphin-2.2-yi-34b-200k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T06:03:03Z | ---
language:
- en
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: cognitivecomputations/dolphin-2.2-yi-34b-200k
model-index:
- name: dolphin-2.2-yi-34b-200k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 42.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 68.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 3.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## cognitivecomputations/dolphin-2.2-yi-34b-200k - GGUF
This repo contains GGUF format model files for [cognitivecomputations/dolphin-2.2-yi-34b-200k](https://huggingface.co/cognitivecomputations/dolphin-2.2-yi-34b-200k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
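Filled in, the ChatML-style template above looks like the following. This is a hypothetical invocation that assumes a llama.cpp build with a `llama-cli` binary; the system and user text are invented for illustration:
```shell
# Placeholders from the template are substituted with concrete text before inference
./llama-cli -m MY_LOCAL_DIR/dolphin-2.2-yi-34b-200k-Q4_K_M.gguf -n 128 \
  -p "<|im_start|>system
You are Dolphin, a helpful assistant.<|im_end|>
<|im_start|>user
Explain what a GGUF file is.<|im_end|>
<|im_start|>assistant
"
```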
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.2-yi-34b-200k-Q2_K.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-2.2-yi-34b-200k-Q3_K_S.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [dolphin-2.2-yi-34b-200k-Q3_K_M.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [dolphin-2.2-yi-34b-200k-Q3_K_L.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [dolphin-2.2-yi-34b-200k-Q4_0.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-2.2-yi-34b-200k-Q4_K_S.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [dolphin-2.2-yi-34b-200k-Q4_K_M.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [dolphin-2.2-yi-34b-200k-Q5_0.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-2.2-yi-34b-200k-Q5_K_S.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [dolphin-2.2-yi-34b-200k-Q5_K_M.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [dolphin-2.2-yi-34b-200k-Q6_K.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [dolphin-2.2-yi-34b-200k-Q8_0.gguf](https://huggingface.co/tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF --include "dolphin-2.2-yi-34b-200k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/cognitivecomputations_dolphin-2.2-yi-34b-200k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF | tensorblock | 2025-06-19T01:55:59Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:Eric111/Yarn-Mistral-7b-128k-DPO",
"base_model:quantized:Eric111/Yarn-Mistral-7b-128k-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:26:14Z | ---
library_name: transformers
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: Eric111/Yarn-Mistral-7b-128k-DPO
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Eric111/Yarn-Mistral-7b-128k-DPO - GGUF
This repo contains GGUF format model files for [Eric111/Yarn-Mistral-7b-128k-DPO](https://huggingface.co/Eric111/Yarn-Mistral-7b-128k-DPO).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yarn-Mistral-7b-128k-DPO-Q2_K.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yarn-Mistral-7b-128k-DPO-Q3_K_S.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Yarn-Mistral-7b-128k-DPO-Q3_K_M.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Yarn-Mistral-7b-128k-DPO-Q3_K_L.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Yarn-Mistral-7b-128k-DPO-Q4_0.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yarn-Mistral-7b-128k-DPO-Q4_K_S.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Yarn-Mistral-7b-128k-DPO-Q4_K_M.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Yarn-Mistral-7b-128k-DPO-Q5_0.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yarn-Mistral-7b-128k-DPO-Q5_K_S.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Yarn-Mistral-7b-128k-DPO-Q5_K_M.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Yarn-Mistral-7b-128k-DPO-Q6_K.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Yarn-Mistral-7b-128k-DPO-Q8_0.gguf](https://huggingface.co/tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF/blob/main/Yarn-Mistral-7b-128k-DPO-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF --include "Yarn-Mistral-7b-128k-DPO-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Eric111_Yarn-Mistral-7b-128k-DPO-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
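Because the base model targets a 128k context, the runtime window usually has to be raised explicitly as well. A hedged sketch, assuming a llama.cpp build where `-c` sets the context size and `-f` reads the prompt from a file; the file name and values are placeholders:
```shell
# Raise the context window before feeding in a long document (values are illustrative)
./llama-cli \
  -m MY_LOCAL_DIR/Yarn-Mistral-7b-128k-DPO-Q4_K_M.gguf \
  -c 32768 \
  -f long_document.txt \
  -n 256
```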
|
tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF | tensorblock | 2025-06-19T01:55:55Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity",
"base_model:quantized:brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T01:45:39Z | ---
language:
- en
license: other
library_name: transformers
tags:
- text-generation-inference
- merge
- TensorBlock
- GGUF
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
model-index:
- name: CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.44
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity - GGUF
This repo contains GGUF format model files for [brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity](https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q2_K.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q3_K_S.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q3_K_M.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q3_K_L.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q4_0.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q4_K_S.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q4_K_M.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q5_0.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q5_K_S.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q5_K_M.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q6_K.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q8_0.gguf](https://huggingface.co/tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF --include "CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
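If disk space is not a concern, the whole repository can also be fetched in one go instead of selecting individual quants; note that this pulls every file listed in the table above:
```shell
# Download all quantization levels of this repo into one directory
huggingface-cli download tensorblock/brucethemoose_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-GGUF --local-dir MY_LOCAL_DIR
```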
|
tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF | tensorblock | 2025-06-19T01:55:23Z | 57 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"base_model:Unbabel/TowerInstruct-13B-v0.1",
"base_model:quantized:Unbabel/TowerInstruct-13B-v0.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | translation | 2025-04-30T17:40:22Z | ---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
tags:
- TensorBlock
- GGUF
base_model: Unbabel/TowerInstruct-13B-v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Unbabel/TowerInstruct-13B-v0.1 - GGUF
This repo contains GGUF format model files for [Unbabel/TowerInstruct-13B-v0.1](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TowerInstruct-13B-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [TowerInstruct-13B-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [TowerInstruct-13B-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [TowerInstruct-13B-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [TowerInstruct-13B-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TowerInstruct-13B-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [TowerInstruct-13B-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [TowerInstruct-13B-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TowerInstruct-13B-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [TowerInstruct-13B-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [TowerInstruct-13B-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [TowerInstruct-13B-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF --include "TowerInstruct-13B-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
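Besides one-off CLI runs, a downloaded quant can also be served over HTTP. The sketch below assumes a llama.cpp build that ships the `llama-server` binary; the quant choice and port are arbitrary:
```shell
# Serve the model over an OpenAI-compatible HTTP endpoint (binary name is an assumption)
./llama-server \
  -m MY_LOCAL_DIR/TowerInstruct-13B-v0.1-Q4_K_M.gguf \
  --port 8080
```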
|
tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF | tensorblock | 2025-06-19T01:54:53Z | 34 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"BioMistral/BioMistral-7B",
"TensorBlock",
"GGUF",
"base_model:rangan2510/BioMistral-Instructv0.2-7B-DARE",
"base_model:quantized:rangan2510/BioMistral-Instructv0.2-7B-DARE",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T11:52:29Z | ---
tags:
- merge
- mergekit
- lazymergekit
- BioMistral/BioMistral-7B
- TensorBlock
- GGUF
base_model: rangan2510/BioMistral-Instructv0.2-7B-DARE
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## rangan2510/BioMistral-Instructv0.2-7B-DARE - GGUF
This repo contains GGUF format model files for [rangan2510/BioMistral-Instructv0.2-7B-DARE](https://huggingface.co/rangan2510/BioMistral-Instructv0.2-7B-DARE).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
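This variant of the template carries no system block; only `{prompt}` is substituted. A minimal, assumed invocation with a llama.cpp-style `llama-cli` binary and a made-up question:
```shell
# Single-turn [INST] prompt without a system section (wording is a placeholder)
./llama-cli -m MY_LOCAL_DIR/BioMistral-Instructv0.2-7B-DARE-Q4_K_M.gguf -n 128 \
  -p "<s>[INST] What are the common symptoms of iron deficiency? [/INST]"
```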
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [BioMistral-Instructv0.2-7B-DARE-Q2_K.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [BioMistral-Instructv0.2-7B-DARE-Q3_K_S.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [BioMistral-Instructv0.2-7B-DARE-Q3_K_M.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [BioMistral-Instructv0.2-7B-DARE-Q3_K_L.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [BioMistral-Instructv0.2-7B-DARE-Q4_0.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [BioMistral-Instructv0.2-7B-DARE-Q4_K_S.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [BioMistral-Instructv0.2-7B-DARE-Q4_K_M.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [BioMistral-Instructv0.2-7B-DARE-Q5_0.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [BioMistral-Instructv0.2-7B-DARE-Q5_K_S.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [BioMistral-Instructv0.2-7B-DARE-Q5_K_M.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [BioMistral-Instructv0.2-7B-DARE-Q6_K.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [BioMistral-Instructv0.2-7B-DARE-Q8_0.gguf](https://huggingface.co/tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF/blob/main/BioMistral-Instructv0.2-7B-DARE-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF --include "BioMistral-Instructv0.2-7B-DARE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/rangan2510_BioMistral-Instructv0.2-7B-DARE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF | tensorblock | 2025-06-19T01:54:51Z | 24 | 0 | null | [
"gguf",
"finetuned",
"TensorBlock",
"GGUF",
"text-generation",
"license:apache-2.0",
"model-index",
"region:us",
"conversational"
] | text-generation | 2025-04-30T11:52:04Z | ---
license: apache-2.0
tags:
- finetuned
- TensorBlock
- GGUF
pipeline_tag: text-generation
inference: false
base_model: notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
model-index:
- name: Mistral-7B-Instruct-v0.2-attention-sparsity-30
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.97
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30 - GGUF
This repo contains GGUF format model files for [notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30](https://huggingface.co/notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q2_K.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q3_K_S.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q3_K_M.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q3_K_L.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q4_0.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q4_K_S.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q4_K_M.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q5_0.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q5_K_S.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q5_K_M.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q6_K.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q8_0.gguf](https://huggingface.co/tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF/blob/main/Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF --include "Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
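If you'd rather script the download than use the CLI, the same `huggingface_hub` package can fetch a file from Python. A minimal sketch, assuming the Q4_K_M file from the table above (`MY_LOCAL_DIR` is just a placeholder directory):
```python
from huggingface_hub import hf_hub_download

# Fetch a single quantized file from the repo listed above.
# "MY_LOCAL_DIR" is a placeholder; point it at any writable directory.
local_path = hf_hub_download(
    repo_id="tensorblock/notadib_Mistral-7B-Instruct-v0.2-attention-sparsity-30-GGUF",
    filename="Mistral-7B-Instruct-v0.2-attention-sparsity-30-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # absolute path of the downloaded .gguf file
```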
|
tensorblock/lex-hue_Delexa-V0.1-7b-GGUF | tensorblock | 2025-06-19T01:54:43Z | 13 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:lex-hue/Delexa-V0.1-7b",
"base_model:quantized:lex-hue/Delexa-V0.1-7b",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T08:12:36Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: lex-hue/Delexa-V0.1-7b
model-index:
- name: Delexa-V0.1-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## lex-hue/Delexa-V0.1-7b - GGUF
This repo contains GGUF format model files for [lex-hue/Delexa-V0.1-7b](https://huggingface.co/lex-hue/Delexa-V0.1-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Delexa-V0.1-7b-Q2_K.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Delexa-V0.1-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Delexa-V0.1-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Delexa-V0.1-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Delexa-V0.1-7b-Q4_0.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Delexa-V0.1-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Delexa-V0.1-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Delexa-V0.1-7b-Q5_0.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Delexa-V0.1-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Delexa-V0.1-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Delexa-V0.1-7b-Q6_K.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Delexa-V0.1-7b-Q8_0.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/lex-hue_Delexa-V0.1-7b-GGUF --include "Delexa-V0.1-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/lex-hue_Delexa-V0.1-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF | tensorblock | 2025-06-19T01:54:30Z | 96 | 0 | null | [
"gguf",
"text-generation-inference",
"TensorBlock",
"GGUF",
"translation",
"de",
"en",
"base_model:Samvardhan777/gemma-2b-mt-German-to-English",
"base_model:quantized:Samvardhan777/gemma-2b-mt-German-to-English",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | translation | 2025-04-30T05:32:03Z | ---
license: mit
language:
- de
- en
pipeline_tag: translation
tags:
- text-generation-inference
- TensorBlock
- GGUF
base_model: Samvardhan777/gemma-2b-mt-German-to-English
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Samvardhan777/gemma-2b-mt-German-to-English - GGUF
This repo contains GGUF format model files for [Samvardhan777/gemma-2b-mt-German-to-English](https://huggingface.co/Samvardhan777/gemma-2b-mt-German-to-English).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
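As a rough local-inference sketch, the template above can be filled in by hand and run through the third-party `llama-cpp-python` bindings (an assumption here, not part of this repo). Exact BOS and special-token handling varies between binding versions, so treat this as illustrative and check the original model card for the expected input format:
```python
from llama_cpp import Llama

# Illustrative only: load the Q4_K_M file from the table above and translate a
# German sentence. The literal <bos> token is left to the loader, which usually
# adds it on its own.
llm = Llama(model_path="gemma-2b-mt-German-to-English-Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "<start_of_turn>user\n"
    "Das Wetter ist heute schön.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
out = llm(prompt, max_tokens=64, stop=["<end_of_turn>"])
print(out["choices"][0]["text"].strip())
```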
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-2b-mt-German-to-English-Q2_K.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q2_K.gguf) | Q2_K | 1.158 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-2b-mt-German-to-English-Q3_K_S.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q3_K_S.gguf) | Q3_K_S | 1.288 GB | very small, high quality loss |
| [gemma-2b-mt-German-to-English-Q3_K_M.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q3_K_M.gguf) | Q3_K_M | 1.384 GB | very small, high quality loss |
| [gemma-2b-mt-German-to-English-Q3_K_L.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q3_K_L.gguf) | Q3_K_L | 1.466 GB | small, substantial quality loss |
| [gemma-2b-mt-German-to-English-Q4_0.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q4_0.gguf) | Q4_0 | 1.551 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-2b-mt-German-to-English-Q4_K_S.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q4_K_S.gguf) | Q4_K_S | 1.560 GB | small, greater quality loss |
| [gemma-2b-mt-German-to-English-Q4_K_M.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q4_K_M.gguf) | Q4_K_M | 1.630 GB | medium, balanced quality - recommended |
| [gemma-2b-mt-German-to-English-Q5_0.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q5_0.gguf) | Q5_0 | 1.799 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-2b-mt-German-to-English-Q5_K_S.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q5_K_S.gguf) | Q5_K_S | 1.799 GB | large, low quality loss - recommended |
| [gemma-2b-mt-German-to-English-Q5_K_M.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q5_K_M.gguf) | Q5_K_M | 1.840 GB | large, very low quality loss - recommended |
| [gemma-2b-mt-German-to-English-Q6_K.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q6_K.gguf) | Q6_K | 2.062 GB | very large, extremely low quality loss |
| [gemma-2b-mt-German-to-English-Q8_0.gguf](https://huggingface.co/tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF/blob/main/gemma-2b-mt-German-to-English-Q8_0.gguf) | Q8_0 | 2.669 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF --include "gemma-2b-mt-German-to-English-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Samvardhan777_gemma-2b-mt-German-to-English-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF | tensorblock | 2025-06-19T01:54:27Z | 14 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:KnutJaegersberg/Llama-3-Deita-8b",
"base_model:quantized:KnutJaegersberg/Llama-3-Deita-8b",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T05:25:30Z | ---
license: llama3
tags:
- TensorBlock
- GGUF
base_model: KnutJaegersberg/Llama-3-Deita-8b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## KnutJaegersberg/Llama-3-Deita-8b - GGUF
This repo contains GGUF format model files for [KnutJaegersberg/Llama-3-Deita-8b](https://huggingface.co/KnutJaegersberg/Llama-3-Deita-8b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Deita-8b-Q2_K.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-Deita-8b-Q3_K_S.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [Llama-3-Deita-8b-Q3_K_M.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3-Deita-8b-Q3_K_L.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3-Deita-8b-Q4_0.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-Deita-8b-Q4_K_S.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3-Deita-8b-Q4_K_M.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3-Deita-8b-Q5_0.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-Deita-8b-Q5_K_S.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3-Deita-8b-Q5_K_M.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3-Deita-8b-Q6_K.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3-Deita-8b-Q8_0.gguf](https://huggingface.co/tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF/blob/main/Llama-3-Deita-8b-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF --include "Llama-3-Deita-8b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/KnutJaegersberg_Llama-3-Deita-8b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF | tensorblock | 2025-06-19T01:54:04Z | 140 | 0 | null | [
"gguf",
"instruction-finetuning",
"TensorBlock",
"GGUF",
"en",
"base_model:IAAR-Shanghai/xFinder-qwen1505",
"base_model:quantized:IAAR-Shanghai/xFinder-qwen1505",
"license:cc-by-nc-nd-4.0",
"region:us",
"conversational"
] | null | 2025-04-30T00:03:38Z | ---
inference: false
language:
- en
tags:
- instruction-finetuning
- TensorBlock
- GGUF
task_categories:
- text-generation
license: cc-by-nc-nd-4.0
base_model: IAAR-Shanghai/xFinder-qwen1505
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## IAAR-Shanghai/xFinder-qwen1505 - GGUF
This repo contains GGUF format model files for [IAAR-Shanghai/xFinder-qwen1505](https://huggingface.co/IAAR-Shanghai/xFinder-qwen1505).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
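For reference, a hedged sketch of driving this template through `llama-cpp-python`'s chat API, where `chat_format="chatml"` matches the `<|im_start|>`/`<|im_end|>` markers above. The bindings and the example messages are assumptions for illustration, not part of this repo:
```python
from llama_cpp import Llama

# Illustrative sketch: newer GGUF files often carry the chat template in their
# metadata, but passing chat_format="chatml" makes the intent explicit.
llm = Llama(model_path="xFinder-qwen1505-Q4_K_M.gguf", chat_format="chatml")

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Extract the final answer: 'The result is 42.'"},
    ],
    max_tokens=32,
)
print(result["choices"][0]["message"]["content"])
```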
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [xFinder-qwen1505-Q2_K.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q2_K.gguf) | Q2_K | 0.298 GB | smallest, significant quality loss - not recommended for most purposes |
| [xFinder-qwen1505-Q3_K_S.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q3_K_S.gguf) | Q3_K_S | 0.333 GB | very small, high quality loss |
| [xFinder-qwen1505-Q3_K_M.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q3_K_M.gguf) | Q3_K_M | 0.350 GB | very small, high quality loss |
| [xFinder-qwen1505-Q3_K_L.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q3_K_L.gguf) | Q3_K_L | 0.364 GB | small, substantial quality loss |
| [xFinder-qwen1505-Q4_0.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q4_0.gguf) | Q4_0 | 0.395 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [xFinder-qwen1505-Q4_K_S.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q4_K_S.gguf) | Q4_K_S | 0.397 GB | small, greater quality loss |
| [xFinder-qwen1505-Q4_K_M.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q4_K_M.gguf) | Q4_K_M | 0.407 GB | medium, balanced quality - recommended |
| [xFinder-qwen1505-Q5_0.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q5_0.gguf) | Q5_0 | 0.453 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [xFinder-qwen1505-Q5_K_S.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q5_K_S.gguf) | Q5_K_S | 0.453 GB | large, low quality loss - recommended |
| [xFinder-qwen1505-Q5_K_M.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q5_K_M.gguf) | Q5_K_M | 0.459 GB | large, very low quality loss - recommended |
| [xFinder-qwen1505-Q6_K.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q6_K.gguf) | Q6_K | 0.515 GB | very large, extremely low quality loss |
| [xFinder-qwen1505-Q8_0.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q8_0.gguf) | Q8_0 | 0.665 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF --include "xFinder-qwen1505-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Kukedlc_LLama-3-8b-Python-GGUF | tensorblock | 2025-06-19T01:53:37Z | 93 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Kukedlc/LLama-3-8b-Python",
"base_model:quantized:Kukedlc/LLama-3-8b-Python",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T19:31:55Z | ---
license: other
tags:
- TensorBlock
- GGUF
base_model: Kukedlc/LLama-3-8b-Python
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Kukedlc/LLama-3-8b-Python - GGUF
This repo contains GGUF format model files for [Kukedlc/LLama-3-8b-Python](https://huggingface.co/Kukedlc/LLama-3-8b-Python).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
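A minimal sketch of filling in this template manually and running it with the third-party `llama-cpp-python` bindings (assumed here, not shipped with this repo); whitespace and BOS handling should be checked against the original model card:
```python
from llama_cpp import Llama

# Template string mirrors the block above. If the loader adds BOS itself,
# drop the literal <|begin_of_text|> to avoid a duplicate token.
TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)

llm = Llama(model_path="LLama-3-8b-Python-Q4_K_M.gguf", n_ctx=4096)
prompt = TEMPLATE.format(
    system_prompt="You are a Python coding assistant.",
    prompt="Write a function that reverses a string.",
)
# Stop on the end-of-turn token so generation does not run into the next header.
out = llm(prompt, max_tokens=128, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```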
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [LLama-3-8b-Python-Q2_K.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [LLama-3-8b-Python-Q3_K_S.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [LLama-3-8b-Python-Q3_K_M.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [LLama-3-8b-Python-Q3_K_L.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [LLama-3-8b-Python-Q4_0.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [LLama-3-8b-Python-Q4_K_S.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [LLama-3-8b-Python-Q4_K_M.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [LLama-3-8b-Python-Q5_0.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [LLama-3-8b-Python-Q5_K_S.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [LLama-3-8b-Python-Q5_K_M.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [LLama-3-8b-Python-Q6_K.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [LLama-3-8b-Python-Q8_0.gguf](https://huggingface.co/tensorblock/Kukedlc_LLama-3-8b-Python-GGUF/blob/main/LLama-3-8b-Python-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Kukedlc_LLama-3-8b-Python-GGUF --include "LLama-3-8b-Python-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Kukedlc_LLama-3-8b-Python-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF | tensorblock | 2025-06-19T01:53:08Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"translation",
"enko",
"ko",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"base_model:nayohan/llama3-8b-it-translation-sharegpt-en-ko",
"base_model:quantized:nayohan/llama3-8b-it-translation-sharegpt-en-ko",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T14:10:31Z | ---
language:
- en
- ko
license: llama3
library_name: transformers
tags:
- translation
- enko
- ko
- TensorBlock
- GGUF
base_model: nayohan/llama3-8b-it-translation-sharegpt-en-ko
datasets:
- squarelike/sharegpt_deepl_ko_translation
pipeline_tag: text-generation
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## nayohan/llama3-8b-it-translation-sharegpt-en-ko - GGUF
This repo contains GGUF format model files for [nayohan/llama3-8b-it-translation-sharegpt-en-ko](https://huggingface.co/nayohan/llama3-8b-it-translation-sharegpt-en-ko).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama3-8b-it-translation-sharegpt-en-ko-Q2_K.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama3-8b-it-translation-sharegpt-en-ko-Q3_K_S.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [llama3-8b-it-translation-sharegpt-en-ko-Q3_K_M.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [llama3-8b-it-translation-sharegpt-en-ko-Q3_K_L.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [llama3-8b-it-translation-sharegpt-en-ko-Q4_0.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama3-8b-it-translation-sharegpt-en-ko-Q4_K_S.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [llama3-8b-it-translation-sharegpt-en-ko-Q4_K_M.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [llama3-8b-it-translation-sharegpt-en-ko-Q5_0.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama3-8b-it-translation-sharegpt-en-ko-Q5_K_S.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [llama3-8b-it-translation-sharegpt-en-ko-Q5_K_M.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [llama3-8b-it-translation-sharegpt-en-ko-Q6_K.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [llama3-8b-it-translation-sharegpt-en-ko-Q8_0.gguf](https://huggingface.co/tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF/blob/main/llama3-8b-it-translation-sharegpt-en-ko-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF --include "llama3-8b-it-translation-sharegpt-en-ko-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
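The pattern-based download can also be scripted from Python with `snapshot_download`; a small sketch mirroring the command above (`MY_LOCAL_DIR` is a placeholder directory):
```python
from huggingface_hub import snapshot_download

# Grab only the Q4_K quantizations of this repo, matching the CLI pattern above.
snapshot_download(
    repo_id="tensorblock/nayohan_llama3-8b-it-translation-sharegpt-en-ko-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```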
|
tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF | tensorblock | 2025-06-19T01:53:02Z | 57 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"nl",
"dataset:BramVanroy/ultrachat_200k_dutch",
"dataset:BramVanroy/stackoverflow-chat-dutch",
"dataset:BramVanroy/alpaca-cleaned-dutch",
"dataset:BramVanroy/dolly-15k-dutch",
"dataset:BramVanroy/no_robots_dutch",
"dataset:BramVanroy/ultra_feedback_dutch",
"base_model:ChocoLlama/ChocoLlama-2-7B-instruct",
"base_model:quantized:ChocoLlama/ChocoLlama-2-7B-instruct",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T12:45:32Z | ---
language:
- nl
license: cc-by-nc-4.0
base_model: ChocoLlama/ChocoLlama-2-7B-instruct
datasets:
- BramVanroy/ultrachat_200k_dutch
- BramVanroy/stackoverflow-chat-dutch
- BramVanroy/alpaca-cleaned-dutch
- BramVanroy/dolly-15k-dutch
- BramVanroy/no_robots_dutch
- BramVanroy/ultra_feedback_dutch
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ChocoLlama/ChocoLlama-2-7B-instruct - GGUF
This repo contains GGUF format model files for [ChocoLlama/ChocoLlama-2-7B-instruct](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ChocoLlama-2-7B-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [ChocoLlama-2-7B-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [ChocoLlama-2-7B-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [ChocoLlama-2-7B-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [ChocoLlama-2-7B-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ChocoLlama-2-7B-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [ChocoLlama-2-7B-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [ChocoLlama-2-7B-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ChocoLlama-2-7B-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [ChocoLlama-2-7B-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [ChocoLlama-2-7B-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [ChocoLlama-2-7B-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF/blob/main/ChocoLlama-2-7B-instruct-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF --include "ChocoLlama-2-7B-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ChocoLlama_ChocoLlama-2-7B-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF | tensorblock | 2025-06-19T01:52:55Z | 7 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:adamo1139/Llama-3-8B-AEZAKMI-run1",
"base_model:quantized:adamo1139/Llama-3-8B-AEZAKMI-run1",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T11:36:19Z | ---
license: other
license_name: llama3
license_link: LICENSE
tags:
- TensorBlock
- GGUF
base_model: adamo1139/Llama-3-8B-AEZAKMI-run1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## adamo1139/Llama-3-8B-AEZAKMI-run1 - GGUF
This repo contains GGUF format model files for [adamo1139/Llama-3-8B-AEZAKMI-run1](https://huggingface.co/adamo1139/Llama-3-8B-AEZAKMI-run1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-AEZAKMI-run1-Q2_K.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8B-AEZAKMI-run1-Q3_K_S.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [Llama-3-8B-AEZAKMI-run1-Q3_K_M.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3-8B-AEZAKMI-run1-Q3_K_L.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3-8B-AEZAKMI-run1-Q4_0.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8B-AEZAKMI-run1-Q4_K_S.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3-8B-AEZAKMI-run1-Q4_K_M.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3-8B-AEZAKMI-run1-Q5_0.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8B-AEZAKMI-run1-Q5_K_S.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3-8B-AEZAKMI-run1-Q5_K_M.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3-8B-AEZAKMI-run1-Q6_K.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3-8B-AEZAKMI-run1-Q8_0.gguf](https://huggingface.co/tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF/blob/main/Llama-3-8B-AEZAKMI-run1-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF --include "Llama-3-8B-AEZAKMI-run1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/adamo1139_Llama-3-8B-AEZAKMI-run1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ResplendentAI_Aura_L3_8B-GGUF | tensorblock | 2025-06-19T01:52:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:ResplendentAI/Aura_L3_8B",
"base_model:quantized:ResplendentAI/Aura_L3_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T04:34:31Z | ---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- TensorBlock
- GGUF
base_model: ResplendentAI/Aura_L3_8B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ResplendentAI/Aura_L3_8B - GGUF
This repo contains GGUF format model files for [ResplendentAI/Aura_L3_8B](https://huggingface.co/ResplendentAI/Aura_L3_8B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Aura_L3_8B-Q2_K.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Aura_L3_8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [Aura_L3_8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Aura_L3_8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Aura_L3_8B-Q4_0.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Aura_L3_8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Aura_L3_8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Aura_L3_8B-Q5_0.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Aura_L3_8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Aura_L3_8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Aura_L3_8B-Q6_K.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Aura_L3_8B-Q8_0.gguf](https://huggingface.co/tensorblock/ResplendentAI_Aura_L3_8B-GGUF/blob/main/Aura_L3_8B-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ResplendentAI_Aura_L3_8B-GGUF --include "Aura_L3_8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ResplendentAI_Aura_L3_8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
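If you prefer a scripted workflow, the same single-file download can be done from Python with `huggingface_hub`. This is a minimal sketch: the repo id and filename come from the table above, and `MY_LOCAL_DIR` is a placeholder directory.
```python
from huggingface_hub import hf_hub_download

# Download one quantized file from this repo into a local directory.
# Repo id and filename are taken from the model file specification above;
# MY_LOCAL_DIR is a placeholder for a directory of your choice.
local_path = hf_hub_download(
    repo_id="tensorblock/ResplendentAI_Aura_L3_8B-GGUF",
    filename="Aura_L3_8B-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # path to the downloaded .gguf file
```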
|
tensorblock/netcat420_MFANNv0.15.10-GGUF | tensorblock | 2025-06-19T01:52:05Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:netcat420/MFANNv0.15.10",
"base_model:quantized:netcat420/MFANNv0.15.10",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T02:03:00Z | ---
base_model: netcat420/MFANNv0.15.10
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## netcat420/MFANNv0.15.10 - GGUF
This repo contains GGUF format model files for [netcat420/MFANNv0.15.10](https://huggingface.co/netcat420/MFANNv0.15.10).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MFANNv0.15.10-Q2_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [MFANNv0.15.10-Q3_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [MFANNv0.15.10-Q3_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [MFANNv0.15.10-Q3_K_L.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [MFANNv0.15.10-Q4_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MFANNv0.15.10-Q4_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [MFANNv0.15.10-Q4_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [MFANNv0.15.10-Q5_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MFANNv0.15.10-Q5_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [MFANNv0.15.10-Q5_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [MFANNv0.15.10-Q6_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [MFANNv0.15.10-Q8_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.15.10-GGUF/blob/main/MFANNv0.15.10-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.15.10-GGUF --include "MFANNv0.15.10-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.15.10-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF | tensorblock | 2025-06-19T01:51:39Z | 22 | 0 | null | [
"gguf",
"trl",
"sft",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mnoukhov/pythia1b-sft-tldr",
"base_model:quantized:mnoukhov/pythia1b-sft-tldr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T23:03:09Z | ---
license: apache-2.0
base_model: mnoukhov/pythia1b-sft-tldr
tags:
- trl
- sft
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: pythia1b-sft-tldr
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mnoukhov/pythia1b-sft-tldr - GGUF
This repo contains GGUF format model files for [mnoukhov/pythia1b-sft-tldr](https://huggingface.co/mnoukhov/pythia1b-sft-tldr).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [pythia1b-sft-tldr-Q2_K.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q2_K.gguf) | Q2_K | 0.420 GB | smallest, significant quality loss - not recommended for most purposes |
| [pythia1b-sft-tldr-Q3_K_S.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q3_K_S.gguf) | Q3_K_S | 0.478 GB | very small, high quality loss |
| [pythia1b-sft-tldr-Q3_K_M.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q3_K_M.gguf) | Q3_K_M | 0.552 GB | very small, high quality loss |
| [pythia1b-sft-tldr-Q3_K_L.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [pythia1b-sft-tldr-Q4_0.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q4_0.gguf) | Q4_0 | 0.599 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pythia1b-sft-tldr-Q4_K_S.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q4_K_S.gguf) | Q4_K_S | 0.603 GB | small, greater quality loss |
| [pythia1b-sft-tldr-Q4_K_M.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q4_K_M.gguf) | Q4_K_M | 0.659 GB | medium, balanced quality - recommended |
| [pythia1b-sft-tldr-Q5_0.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q5_0.gguf) | Q5_0 | 0.712 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pythia1b-sft-tldr-Q5_K_S.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q5_K_S.gguf) | Q5_K_S | 0.712 GB | large, low quality loss - recommended |
| [pythia1b-sft-tldr-Q5_K_M.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q5_K_M.gguf) | Q5_K_M | 0.757 GB | large, very low quality loss - recommended |
| [pythia1b-sft-tldr-Q6_K.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q6_K.gguf) | Q6_K | 0.833 GB | very large, extremely low quality loss |
| [pythia1b-sft-tldr-Q8_0.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF/blob/main/pythia1b-sft-tldr-Q8_0.gguf) | Q8_0 | 1.078 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF --include "pythia1b-sft-tldr-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
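The pattern-based download also has a Python equivalent: `snapshot_download` accepts glob patterns. A minimal sketch, assuming you only want the Q4_K variants of this repo:
```python
from huggingface_hub import snapshot_download

# Fetch only the files matching a glob pattern (here, the Q4_K quants).
# MY_LOCAL_DIR is a placeholder for a directory of your choice.
snapshot_download(
    repo_id="tensorblock/mnoukhov_pythia1b-sft-tldr-GGUF",
    allow_patterns=["*Q4_K*.gguf"],
    local_dir="MY_LOCAL_DIR",
)
```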
|
tensorblock/netcat420_MFANNv0.16.10-GGUF | tensorblock | 2025-06-19T01:51:03Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:netcat420/MFANNv0.16.10",
"base_model:quantized:netcat420/MFANNv0.16.10",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T08:42:56Z | ---
base_model: netcat420/MFANNv0.16.10
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## netcat420/MFANNv0.16.10 - GGUF
This repo contains GGUF format model files for [netcat420/MFANNv0.16.10](https://huggingface.co/netcat420/MFANNv0.16.10).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MFANNv0.16.10-Q2_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [MFANNv0.16.10-Q3_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [MFANNv0.16.10-Q3_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [MFANNv0.16.10-Q3_K_L.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [MFANNv0.16.10-Q4_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MFANNv0.16.10-Q4_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [MFANNv0.16.10-Q4_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [MFANNv0.16.10-Q5_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MFANNv0.16.10-Q5_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [MFANNv0.16.10-Q5_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [MFANNv0.16.10-Q6_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [MFANNv0.16.10-Q8_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.16.10-GGUF --include "MFANNv0.16.10-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.16.10-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF | tensorblock | 2025-06-19T01:50:59Z | 55 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:OpenVINO/neural-chat-7b-v3-3-int4-ov",
"base_model:quantized:OpenVINO/neural-chat-7b-v3-3-int4-ov",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:13:57Z | ---
license: apache-2.0
license_link: https://choosealicense.com/licenses/apache-2.0/
base_model: OpenVINO/neural-chat-7b-v3-3-int4-ov
base_model_relation: quantized
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## OpenVINO/neural-chat-7b-v3-3-int4-ov - GGUF
This repo contains GGUF format model files for [OpenVINO/neural-chat-7b-v3-3-int4-ov](https://huggingface.co/OpenVINO/neural-chat-7b-v3-3-int4-ov).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [neural-chat-7b-v3-3-int4-ov-Q2_K.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q2_K.gguf) | Q2_K | 0.001 GB | smallest, significant quality loss - not recommended for most purposes |
| [neural-chat-7b-v3-3-int4-ov-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q3_K_S.gguf) | Q3_K_S | 0.001 GB | very small, high quality loss |
| [neural-chat-7b-v3-3-int4-ov-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q3_K_M.gguf) | Q3_K_M | 0.001 GB | very small, high quality loss |
| [neural-chat-7b-v3-3-int4-ov-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q3_K_L.gguf) | Q3_K_L | 0.001 GB | small, substantial quality loss |
| [neural-chat-7b-v3-3-int4-ov-Q4_0.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q4_0.gguf) | Q4_0 | 0.001 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [neural-chat-7b-v3-3-int4-ov-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q4_K_S.gguf) | Q4_K_S | 0.001 GB | small, greater quality loss |
| [neural-chat-7b-v3-3-int4-ov-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q4_K_M.gguf) | Q4_K_M | 0.001 GB | medium, balanced quality - recommended |
| [neural-chat-7b-v3-3-int4-ov-Q5_0.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q5_0.gguf) | Q5_0 | 0.001 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [neural-chat-7b-v3-3-int4-ov-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q5_K_S.gguf) | Q5_K_S | 0.001 GB | large, low quality loss - recommended |
| [neural-chat-7b-v3-3-int4-ov-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q5_K_M.gguf) | Q5_K_M | 0.001 GB | large, very low quality loss - recommended |
| [neural-chat-7b-v3-3-int4-ov-Q6_K.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q6_K.gguf) | Q6_K | 0.001 GB | very large, extremely low quality loss |
| [neural-chat-7b-v3-3-int4-ov-Q8_0.gguf](https://huggingface.co/tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF/blob/main/neural-chat-7b-v3-3-int4-ov-Q8_0.gguf) | Q8_0 | 0.001 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF --include "neural-chat-7b-v3-3-int4-ov-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenVINO_neural-chat-7b-v3-3-int4-ov-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF | tensorblock | 2025-06-19T01:50:42Z | 98 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:allenai/dolma",
"dataset:allenai/tulu-v2-sft-mixture-olmo-4096",
"base_model:hamishivi/OLMo-1B-0724-SFT-hf",
"base_model:quantized:hamishivi/OLMo-1B-0724-SFT-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T07:03:50Z | ---
license: apache-2.0
datasets:
- allenai/dolma
- allenai/tulu-v2-sft-mixture-olmo-4096
language:
- en
tags:
- TensorBlock
- GGUF
base_model: hamishivi/OLMo-1B-0724-SFT-hf
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## hamishivi/OLMo-1B-0724-SFT-hf - GGUF
This repo contains GGUF format model files for [hamishivi/OLMo-1B-0724-SFT-hf](https://huggingface.co/hamishivi/OLMo-1B-0724-SFT-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|endoftext|><|user|>
{prompt}
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OLMo-1B-0724-SFT-hf-Q2_K.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q2_K.gguf) | Q2_K | 0.513 GB | smallest, significant quality loss - not recommended for most purposes |
| [OLMo-1B-0724-SFT-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q3_K_S.gguf) | Q3_K_S | 0.592 GB | very small, high quality loss |
| [OLMo-1B-0724-SFT-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q3_K_M.gguf) | Q3_K_M | 0.649 GB | very small, high quality loss |
| [OLMo-1B-0724-SFT-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q3_K_L.gguf) | Q3_K_L | 0.696 GB | small, substantial quality loss |
| [OLMo-1B-0724-SFT-hf-Q4_0.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q4_0.gguf) | Q4_0 | 0.748 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OLMo-1B-0724-SFT-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q4_K_S.gguf) | Q4_K_S | 0.755 GB | small, greater quality loss |
| [OLMo-1B-0724-SFT-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q4_K_M.gguf) | Q4_K_M | 0.791 GB | medium, balanced quality - recommended |
| [OLMo-1B-0724-SFT-hf-Q5_0.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q5_0.gguf) | Q5_0 | 0.895 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OLMo-1B-0724-SFT-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q5_K_S.gguf) | Q5_K_S | 0.895 GB | large, low quality loss - recommended |
| [OLMo-1B-0724-SFT-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q5_K_M.gguf) | Q5_K_M | 0.918 GB | large, very low quality loss - recommended |
| [OLMo-1B-0724-SFT-hf-Q6_K.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q6_K.gguf) | Q6_K | 1.052 GB | very large, extremely low quality loss |
| [OLMo-1B-0724-SFT-hf-Q8_0.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q8_0.gguf) | Q8_0 | 1.362 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF --include "OLMo-1B-0724-SFT-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Value4AI_ValueLlama-3-8B-GGUF | tensorblock | 2025-06-19T01:50:00Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"TensorBlock",
"GGUF",
"en",
"dataset:allenai/ValuePrism",
"dataset:Value4AI/ValueBench",
"base_model:Value4AI/ValueLlama-3-8B",
"base_model:quantized:Value4AI/ValueLlama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T23:43:03Z | ---
library_name: transformers
tags:
- llama-factory
- TensorBlock
- GGUF
license: llama3
datasets:
- allenai/ValuePrism
- Value4AI/ValueBench
language:
- en
base_model: Value4AI/ValueLlama-3-8B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Value4AI/ValueLlama-3-8B - GGUF
This repo contains GGUF format model files for [Value4AI/ValueLlama-3-8B](https://huggingface.co/Value4AI/ValueLlama-3-8B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ValueLlama-3-8B-Q2_K.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [ValueLlama-3-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [ValueLlama-3-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [ValueLlama-3-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [ValueLlama-3-8B-Q4_0.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ValueLlama-3-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [ValueLlama-3-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [ValueLlama-3-8B-Q5_0.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ValueLlama-3-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [ValueLlama-3-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [ValueLlama-3-8B-Q6_K.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [ValueLlama-3-8B-Q8_0.gguf](https://huggingface.co/tensorblock/Value4AI_ValueLlama-3-8B-GGUF/blob/main/ValueLlama-3-8B-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Value4AI_ValueLlama-3-8B-GGUF --include "ValueLlama-3-8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Value4AI_ValueLlama-3-8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF | tensorblock | 2025-06-19T01:49:52Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b",
"base_model:quantized:mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T20:54:54Z | ---
library_name: transformers
license: other
base_model: mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: mlfoundations-dev_code-stratos-unverified-scaled-0.25_stratos_7b
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b](https://huggingface.co/mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
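If you assemble prompts yourself (for example, when calling a llama.cpp server directly), the template above can be filled in with plain string formatting. A minimal sketch with example messages:
```python
# Fill the ChatML-style template from the "Prompt template" section above.
TEMPLATE = (
    "<|im_start|>system\n{system_prompt}<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

prompt = TEMPLATE.format(
    system_prompt="You are a helpful assistant.",
    prompt="Explain what a GGUF file is in one sentence.",
)
print(prompt)
```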
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q2_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q6_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q8_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF --include "mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF | tensorblock | 2025-06-19T01:49:08Z | 147 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:SakanaAI/Llama-3-8B-Instruct-Coding-Expert",
"base_model:quantized:SakanaAI/Llama-3-8B-Instruct-Coding-Expert",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-27T13:34:15Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: llama3
model_type: llama
tags:
- TensorBlock
- GGUF
base_model: SakanaAI/Llama-3-8B-Instruct-Coding-Expert
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## SakanaAI/Llama-3-8B-Instruct-Coding-Expert - GGUF
This repo contains GGUF format model files for [SakanaAI/Llama-3-8B-Instruct-Coding-Expert](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-Coding-Expert).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-Instruct-Coding-Expert-Q2_K.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8B-Instruct-Coding-Expert-Q3_K_S.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [Llama-3-8B-Instruct-Coding-Expert-Q3_K_M.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3-8B-Instruct-Coding-Expert-Q3_K_L.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3-8B-Instruct-Coding-Expert-Q4_0.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8B-Instruct-Coding-Expert-Q4_K_S.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3-8B-Instruct-Coding-Expert-Q4_K_M.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3-8B-Instruct-Coding-Expert-Q5_0.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8B-Instruct-Coding-Expert-Q5_K_S.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3-8B-Instruct-Coding-Expert-Q5_K_M.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3-8B-Instruct-Coding-Expert-Q6_K.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3-8B-Instruct-Coding-Expert-Q8_0.gguf](https://huggingface.co/tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF/blob/main/Llama-3-8B-Instruct-Coding-Expert-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF --include "Llama-3-8B-Instruct-Coding-Expert-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SakanaAI_Llama-3-8B-Instruct-Coding-Expert-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
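After downloading, the file can be loaded with any llama.cpp-compatible runtime. The sketch below uses the `llama-cpp-python` bindings (an assumption; any llama.cpp build at or after the commit noted above should also work) and fills in the Llama 3 prompt template shown earlier. The model path is a placeholder pointing at the Q4_K_M file from the table.
```python
from llama_cpp import Llama

# Load the downloaded GGUF file; the path below is a placeholder.
llm = Llama(
    model_path="MY_LOCAL_DIR/Llama-3-8B-Instruct-Coding-Expert-Q4_K_M.gguf",
    n_ctx=4096,
)

# Build a prompt following the template from the "Prompt template" section above.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "You are a helpful coding assistant.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n"
    "Write a Python function that reverses a string.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n"
)

output = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(output["choices"][0]["text"])
```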
|
tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF | tensorblock | 2025-06-19T01:49:05Z | 16 | 0 | null | [
"gguf",
"llama-3.1",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/openbuddy-llama3.1-8b-v22.1-131k",
"base_model:quantized:OpenBuddy/openbuddy-llama3.1-8b-v22.1-131k",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-27T12:50:27Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
pipeline_tag: text-generation
tags:
- llama-3.1
- TensorBlock
- GGUF
license: other
license_name: llama3.1
license_link: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE
base_model: OpenBuddy/openbuddy-llama3.1-8b-v22.1-131k
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## OpenBuddy/openbuddy-llama3.1-8b-v22.1-131k - GGUF
This repo contains GGUF format model files for [OpenBuddy/openbuddy-llama3.1-8b-v22.1-131k](https://huggingface.co/OpenBuddy/openbuddy-llama3.1-8b-v22.1-131k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|role|>system<|says|>{system_prompt}<|end|>
<|role|>user<|says|>{prompt}<|end|>
<|role|>assistant<|says|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [openbuddy-llama3.1-8b-v22.1-131k-Q2_K.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-llama3.1-8b-v22.1-131k-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [openbuddy-llama3.1-8b-v22.1-131k-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [openbuddy-llama3.1-8b-v22.1-131k-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [openbuddy-llama3.1-8b-v22.1-131k-Q4_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-llama3.1-8b-v22.1-131k-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [openbuddy-llama3.1-8b-v22.1-131k-Q5_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-llama3.1-8b-v22.1-131k-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [openbuddy-llama3.1-8b-v22.1-131k-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [openbuddy-llama3.1-8b-v22.1-131k-Q6_K.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [openbuddy-llama3.1-8b-v22.1-131k-Q8_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF/blob/main/openbuddy-llama3.1-8b-v22.1-131k-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF --include "openbuddy-llama3.1-8b-v22.1-131k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
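If you prefer to script the download, a rough Python equivalent uses the `huggingface_hub` API; the filename and local directory below are only examples; substitute any file from the table above.
```python
# Sketch: Python equivalent of the huggingface-cli download command above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/OpenBuddy_openbuddy-llama3.1-8b-v22.1-131k-GGUF",
    filename="openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M.gguf",  # any file from the table above
    local_dir="MY_LOCAL_DIR",                                 # placeholder directory
)
print(local_path)
```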
|
tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF | tensorblock | 2025-06-19T01:49:02Z | 58 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"12b",
"chat",
"roleplay",
"creative-writing",
"SLERP",
"TensorBlock",
"GGUF",
"base_model:redrix/patricide-12B-Unslop-Mell",
"base_model:quantized:redrix/patricide-12B-Unslop-Mell",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T12:31:38Z | ---
base_model: redrix/patricide-12B-Unslop-Mell
library_name: transformers
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- SLERP
- TensorBlock
- GGUF
license: apache-2.0
new_version: redrix/patricide-12B-Unslop-Mell-v2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## redrix/patricide-12B-Unslop-Mell - GGUF
This repo contains GGUF format model files for [redrix/patricide-12B-Unslop-Mell](https://huggingface.co/redrix/patricide-12B-Unslop-Mell).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [patricide-12B-Unslop-Mell-Q2_K.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q2_K.gguf) | Q2_K | 4.791 GB | smallest, significant quality loss - not recommended for most purposes |
| [patricide-12B-Unslop-Mell-Q3_K_S.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q3_K_S.gguf) | Q3_K_S | 5.534 GB | very small, high quality loss |
| [patricide-12B-Unslop-Mell-Q3_K_M.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q3_K_M.gguf) | Q3_K_M | 6.083 GB | very small, high quality loss |
| [patricide-12B-Unslop-Mell-Q3_K_L.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q3_K_L.gguf) | Q3_K_L | 6.562 GB | small, substantial quality loss |
| [patricide-12B-Unslop-Mell-Q4_0.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q4_0.gguf) | Q4_0 | 7.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [patricide-12B-Unslop-Mell-Q4_K_S.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q4_K_S.gguf) | Q4_K_S | 7.120 GB | small, greater quality loss |
| [patricide-12B-Unslop-Mell-Q4_K_M.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q4_K_M.gguf) | Q4_K_M | 7.477 GB | medium, balanced quality - recommended |
| [patricide-12B-Unslop-Mell-Q5_0.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q5_0.gguf) | Q5_0 | 8.519 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [patricide-12B-Unslop-Mell-Q5_K_S.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q5_K_S.gguf) | Q5_K_S | 8.519 GB | large, low quality loss - recommended |
| [patricide-12B-Unslop-Mell-Q5_K_M.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q5_K_M.gguf) | Q5_K_M | 8.728 GB | large, very low quality loss - recommended |
| [patricide-12B-Unslop-Mell-Q6_K.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q6_K.gguf) | Q6_K | 10.056 GB | very large, extremely low quality loss |
| [patricide-12B-Unslop-Mell-Q8_0.gguf](https://huggingface.co/tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF/blob/main/patricide-12B-Unslop-Mell-Q8_0.gguf) | Q8_0 | 13.022 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF --include "patricide-12B-Unslop-Mell-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/redrix_patricide-12B-Unslop-Mell-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/osllmai-community_Llama-3.2-1B-GGUF | tensorblock | 2025-06-19T01:48:44Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"osllmai",
"TensorBlock",
"GGUF",
"en",
"base_model:osllmai-community/Llama-3.2-1B",
"base_model:quantized:osllmai-community/Llama-3.2-1B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T08:58:51Z | ---
base_model: osllmai-community/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- osllmai
- transformers
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## osllmai-community/Llama-3.2-1B - GGUF
This repo contains GGUF format model files for [osllmai-community/Llama-3.2-1B](https://huggingface.co/osllmai-community/Llama-3.2-1B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3.2-1B-Q2_K.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q2_K.gguf) | Q2_K | 0.581 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.2-1B-Q3_K_S.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q3_K_S.gguf) | Q3_K_S | 0.642 GB | very small, high quality loss |
| [Llama-3.2-1B-Q3_K_M.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q3_K_M.gguf) | Q3_K_M | 0.691 GB | very small, high quality loss |
| [Llama-3.2-1B-Q3_K_L.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q3_K_L.gguf) | Q3_K_L | 0.733 GB | small, substantial quality loss |
| [Llama-3.2-1B-Q4_0.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q4_0.gguf) | Q4_0 | 0.771 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.2-1B-Q4_K_S.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q4_K_S.gguf) | Q4_K_S | 0.776 GB | small, greater quality loss |
| [Llama-3.2-1B-Q4_K_M.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q4_K_M.gguf) | Q4_K_M | 0.808 GB | medium, balanced quality - recommended |
| [Llama-3.2-1B-Q5_0.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q5_0.gguf) | Q5_0 | 0.893 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.2-1B-Q5_K_S.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q5_K_S.gguf) | Q5_K_S | 0.893 GB | large, low quality loss - recommended |
| [Llama-3.2-1B-Q5_K_M.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q5_K_M.gguf) | Q5_K_M | 0.911 GB | large, very low quality loss - recommended |
| [Llama-3.2-1B-Q6_K.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q6_K.gguf) | Q6_K | 1.022 GB | very large, extremely low quality loss |
| [Llama-3.2-1B-Q8_0.gguf](https://huggingface.co/tensorblock/osllmai-community_Llama-3.2-1B-GGUF/blob/main/Llama-3.2-1B-Q8_0.gguf) | Q8_0 | 1.321 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/osllmai-community_Llama-3.2-1B-GGUF --include "Llama-3.2-1B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/osllmai-community_Llama-3.2-1B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Macropodus_macbert4csc_v2-GGUF | tensorblock | 2025-06-19T01:48:30Z | 65 | 0 | null | [
"gguf",
"csc",
"text-correct",
"chinses-spelling-correct",
"chinese-spelling-check",
"δΈζζΌεηΊ ι",
"macbert4csc",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"base_model:Macropodus/macbert4csc_v2",
"base_model:quantized:Macropodus/macbert4csc_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | text-generation | 2025-04-27T06:47:05Z | ---
license: apache-2.0
language:
- zh
base_model: Macropodus/macbert4csc_v2
pipeline_tag: text-generation
tags:
- csc
- text-correct
- chinses-spelling-correct
- chinese-spelling-check
- 中文拼写纠错
- macbert4csc
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Macropodus/macbert4csc_v2 - GGUF
This repo contains GGUF format model files for [Macropodus/macbert4csc_v2](https://huggingface.co/Macropodus/macbert4csc_v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [macbert4csc_v2-Q2_K.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q2_K.gguf) | Q2_K | 0.048 GB | smallest, significant quality loss - not recommended for most purposes |
| [macbert4csc_v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q3_K_S.gguf) | Q3_K_S | 0.052 GB | very small, high quality loss |
| [macbert4csc_v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q3_K_M.gguf) | Q3_K_M | 0.058 GB | very small, high quality loss |
| [macbert4csc_v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q3_K_L.gguf) | Q3_K_L | 0.063 GB | small, substantial quality loss |
| [macbert4csc_v2-Q4_0.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q4_0.gguf) | Q4_0 | 0.064 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [macbert4csc_v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q4_K_S.gguf) | Q4_K_S | 0.064 GB | small, greater quality loss |
| [macbert4csc_v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q4_K_M.gguf) | Q4_K_M | 0.068 GB | medium, balanced quality - recommended |
| [macbert4csc_v2-Q5_0.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q5_0.gguf) | Q5_0 | 0.074 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [macbert4csc_v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q5_K_S.gguf) | Q5_K_S | 0.074 GB | large, low quality loss - recommended |
| [macbert4csc_v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q5_K_M.gguf) | Q5_K_M | 0.076 GB | large, very low quality loss - recommended |
| [macbert4csc_v2-Q6_K.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q6_K.gguf) | Q6_K | 0.085 GB | very large, extremely low quality loss |
| [macbert4csc_v2-Q8_0.gguf](https://huggingface.co/tensorblock/Macropodus_macbert4csc_v2-GGUF/blob/main/macbert4csc_v2-Q8_0.gguf) | Q8_0 | 0.110 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/Macropodus_macbert4csc_v2-GGUF --include "macbert4csc_v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Macropodus_macbert4csc_v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF | tensorblock | 2025-06-19T01:48:15Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:GiKAGraphy/ProductLlama-8B-Instruct",
"base_model:quantized:GiKAGraphy/ProductLlama-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-27T05:03:29Z | ---
license: apache-2.0
language:
- en
base_model: GiKAGraphy/ProductLlama-8B-Instruct
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## GiKAGraphy/ProductLlama-8B-Instruct - GGUF
This repo contains GGUF format model files for [GiKAGraphy/ProductLlama-8B-Instruct](https://huggingface.co/GiKAGraphy/ProductLlama-8B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
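Alternatively, assuming the source repository ships a tokenizer with a chat template, the same Llama-3-style prompt can usually be produced from a messages list with `apply_chat_template`; the messages below are invented examples.
```python
# Sketch: build the prompt from chat messages instead of hand-writing the special tokens.
from transformers import AutoTokenizer

# Assumes the source repo provides a tokenizer with a chat template.
tokenizer = AutoTokenizer.from_pretrained("GiKAGraphy/ProductLlama-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},            # example system prompt
    {"role": "user", "content": "List three uses of product embeddings."},    # example user prompt
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```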
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ProductLlama-8B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [ProductLlama-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [ProductLlama-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [ProductLlama-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [ProductLlama-8B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ProductLlama-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [ProductLlama-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [ProductLlama-8B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ProductLlama-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [ProductLlama-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [ProductLlama-8B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [ProductLlama-8B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF/blob/main/ProductLlama-8B-Instruct-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF --include "ProductLlama-8B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/GiKAGraphy_ProductLlama-8B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF | tensorblock | 2025-06-19T01:47:56Z | 54 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-3.3",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"region:us",
"conversational"
] | text-generation | 2025-04-26T23:27:14Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.3
- TensorBlock
- GGUF
base_model: ibm-granite/granite-3.3-2b-instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ibm-granite/granite-3.3-2b-instruct - GGUF
This repo contains GGUF format model files for [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|start_of_role|>system<|end_of_role|>{system_prompt}<|end_of_text|>
<|start_of_role|>user<|end_of_role|>{prompt}<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
```
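As a rough sketch, a single-turn Granite prompt can be assembled by substituting the placeholders above; the message contents below are invented examples.
```python
# Sketch: fill in the Granite 3.3 role markers with example messages.
def build_granite_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|start_of_role|>system<|end_of_role|>{system_prompt}<|end_of_text|>\n"
        f"<|start_of_role|>user<|end_of_role|>{prompt}<|end_of_text|>\n"
        f"<|start_of_role|>assistant<|end_of_role|>"
    )

print(build_granite_prompt(
    "You are a concise assistant.",   # assumed system message
    "What is a GGUF file?",           # assumed user message
))
```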
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [granite-3.3-2b-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q2_K.gguf) | Q2_K | 0.978 GB | smallest, significant quality loss - not recommended for most purposes |
| [granite-3.3-2b-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q3_K_S.gguf) | Q3_K_S | 1.130 GB | very small, high quality loss |
| [granite-3.3-2b-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q3_K_M.gguf) | Q3_K_M | 1.252 GB | very small, high quality loss |
| [granite-3.3-2b-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q3_K_L.gguf) | Q3_K_L | 1.357 GB | small, substantial quality loss |
| [granite-3.3-2b-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q4_0.gguf) | Q4_0 | 1.453 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [granite-3.3-2b-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q4_K_S.gguf) | Q4_K_S | 1.464 GB | small, greater quality loss |
| [granite-3.3-2b-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q4_K_M.gguf) | Q4_K_M | 1.545 GB | medium, balanced quality - recommended |
| [granite-3.3-2b-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q5_0.gguf) | Q5_0 | 1.757 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [granite-3.3-2b-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q5_K_S.gguf) | Q5_K_S | 1.757 GB | large, low quality loss - recommended |
| [granite-3.3-2b-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q5_K_M.gguf) | Q5_K_M | 1.805 GB | large, very low quality loss - recommended |
| [granite-3.3-2b-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q6_K.gguf) | Q6_K | 2.081 GB | very large, extremely low quality loss |
| [granite-3.3-2b-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q8_0.gguf) | Q8_0 | 2.694 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF --include "granite-3.3-2b-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
visolex/phobert-emotion | visolex | 2025-06-19T01:47:44Z | 2 | 0 | null | [
"safetensors",
"roberta",
"emotion-recognition",
"vietnamese",
"phobert",
"text-classification",
"vi",
"dataset:VSMEC",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2025-06-16T03:54:06Z | ---
language: vi
tags:
- emotion-recognition
- vietnamese
- phobert
license: apache-2.0
datasets:
- VSMEC
metrics:
- accuracy
- f1
model-index:
- name: phobert-emotion
results:
- task:
type: text-classification
name: Emotion Recognition
dataset:
name: VSMEC
type: custom
metrics:
- name: Accuracy
type: accuracy
value: <INSERT_ACCURACY>
- name: F1 Score
type: f1
value: <INSERT_F1_SCORE>
base_model:
- vinai/phobert-base
pipeline_tag: text-classification
---
# PhoBERT-Emotion: Emotion Recognition for Vietnamese Text
This model is a fine-tuned version of [`vinai/phobert-base`](https://huggingface.co/vinai/phobert-base) on the **VSMEC** dataset for emotion recognition in Vietnamese text. It achieves competitive performance on this task.
## Model Details
- **Base Model**: [`vinai/phobert-base`](https://huggingface.co/vinai/phobert-base)
- **Dataset**: [VSMEC](https://github.com/uitnlp/vsmec) (Vietnamese Social Media Emotion Corpus)
- **Fine-tuning Framework**: HuggingFace Transformers
- **Hyperparameters**:
- Batch size: `32`
- Learning rate: `5e-5`
- Epochs: `100`
- Max sequence length: `256`
## Dataset
The model was trained on the **VSMEC** dataset, which contains Vietnamese social media text annotated with emotion labels. The dataset includes the following emotion categories:
`{"Anger": 0, "Disgust": 1, "Enjoyment": 2, "Fear": 3, "Other": 4, "Sadness": 5, "Surprise": 6}`.
## Results
The model was evaluated using the following metrics:
- **Accuracy**: `<INSERT_ACCURACY>`
- **F1 Score**: `<INSERT_F1_SCORE>`
## Usage
You can use this model for emotion recognition in Vietnamese text. Below is an example of how to use it with the HuggingFace Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("visolex/phobert-emotion")
model = AutoModelForSequenceClassification.from_pretrained("visolex/phobert-emotion")
text = "TΓ΄i rαΊ₯t vui vΓ¬ hΓ΄m nay trα»i ΔαΊΉp!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
print(f"Predicted emotion: {predicted_class}") |
buttercoconut/Qwen2.5-ko-alpaca-0.5B-Q4 | buttercoconut | 2025-06-19T01:47:00Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ko",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-06-19T01:25:27Z | ---
license: apache-2.0
language:
- ko
base_model:
- Qwen/Qwen2.5-0.5B
pipeline_tag: text-generation
--- |
tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF | tensorblock | 2025-06-19T01:46:09Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am singing timid cassowary",
"trl",
"TensorBlock",
"GGUF",
"base_model:revonodes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary",
"base_model:quantized:revonodes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T07:41:16Z | ---
base_model: revonodes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am singing timid cassowary
- trl
- TensorBlock
- GGUF
licence: license
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## revonodes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary - GGUF
This repo contains GGUF format model files for [revonodes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary](https://huggingface.co/revonodes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
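As a usage sketch, a downloaded GGUF file can be queried with the `llama-cpp-python` bindings, which apply this ChatML template through their chat-completion helper; the file path, context size, and messages below are placeholder assumptions.
```python
# Sketch: run a chat completion against a local GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q4_K_M.gguf",
    n_ctx=2048,  # arbitrary example context size
)

response = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a helpful assistant."},  # example system prompt
    {"role": "user", "content": "Say hello in one sentence."},      # example user prompt
])
print(response["choices"][0]["message"]["content"])
```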
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q2_K.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q2_K.gguf) | Q2_K | 0.339 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q3_K_S.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q3_K_S.gguf) | Q3_K_S | 0.338 GB | very small, high quality loss |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q3_K_M.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q3_K_M.gguf) | Q3_K_M | 0.355 GB | very small, high quality loss |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q3_K_L.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q3_K_L.gguf) | Q3_K_L | 0.369 GB | small, substantial quality loss |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q4_0.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q4_0.gguf) | Q4_0 | 0.352 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q4_K_S.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q4_K_S.gguf) | Q4_K_S | 0.385 GB | small, greater quality loss |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q4_K_M.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q4_K_M.gguf) | Q4_K_M | 0.398 GB | medium, balanced quality - recommended |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q5_0.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q5_0.gguf) | Q5_0 | 0.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q5_K_S.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q5_K_S.gguf) | Q5_K_S | 0.413 GB | large, low quality loss - recommended |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q5_K_M.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q5_K_M.gguf) | Q5_K_M | 0.420 GB | large, very low quality loss - recommended |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q6_K.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q6_K.gguf) | Q6_K | 0.506 GB | very large, extremely low quality loss |
| [Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q8_0.gguf](https://huggingface.co/tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF/blob/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q8_0.gguf) | Q8_0 | 0.531 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF --include "Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/revonodes_Qwen2.5-0.5B-Instruct-Gensyn-Swarm-singing_timid_cassowary-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/hfl_chinese-llama-2-13b-GGUF | tensorblock | 2025-06-19T01:44:59Z | 84 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"zh",
"en",
"base_model:hfl/chinese-llama-2-13b",
"base_model:quantized:hfl/chinese-llama-2-13b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T20:36:29Z | ---
license: apache-2.0
language:
- zh
- en
tags:
- TensorBlock
- GGUF
base_model: hfl/chinese-llama-2-13b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## hfl/chinese-llama-2-13b - GGUF
This repo contains GGUF format model files for [hfl/chinese-llama-2-13b](https://huggingface.co/hfl/chinese-llama-2-13b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [chinese-llama-2-13b-Q2_K.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q2_K.gguf) | Q2_K | 4.992 GB | smallest, significant quality loss - not recommended for most purposes |
| [chinese-llama-2-13b-Q3_K_S.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q3_K_S.gguf) | Q3_K_S | 5.809 GB | very small, high quality loss |
| [chinese-llama-2-13b-Q3_K_M.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q3_K_M.gguf) | Q3_K_M | 6.487 GB | very small, high quality loss |
| [chinese-llama-2-13b-Q3_K_L.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q3_K_L.gguf) | Q3_K_L | 7.079 GB | small, substantial quality loss |
| [chinese-llama-2-13b-Q4_0.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q4_0.gguf) | Q4_0 | 7.531 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chinese-llama-2-13b-Q4_K_S.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q4_K_S.gguf) | Q4_K_S | 7.589 GB | small, greater quality loss |
| [chinese-llama-2-13b-Q4_K_M.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q4_K_M.gguf) | Q4_K_M | 8.031 GB | medium, balanced quality - recommended |
| [chinese-llama-2-13b-Q5_0.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q5_0.gguf) | Q5_0 | 9.153 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chinese-llama-2-13b-Q5_K_S.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q5_K_S.gguf) | Q5_K_S | 9.153 GB | large, low quality loss - recommended |
| [chinese-llama-2-13b-Q5_K_M.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q5_K_M.gguf) | Q5_K_M | 9.410 GB | large, very low quality loss - recommended |
| [chinese-llama-2-13b-Q6_K.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q6_K.gguf) | Q6_K | 10.875 GB | very large, extremely low quality loss |
| [chinese-llama-2-13b-Q8_0.gguf](https://huggingface.co/tensorblock/hfl_chinese-llama-2-13b-GGUF/blob/main/chinese-llama-2-13b-Q8_0.gguf) | Q8_0 | 14.085 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/hfl_chinese-llama-2-13b-GGUF --include "chinese-llama-2-13b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/hfl_chinese-llama-2-13b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF | tensorblock | 2025-06-19T01:44:51Z | 114 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:ethicalabs/Kurtis-E1-SFT",
"base_model:ethicalabs/Kurtis-E1.1-Qwen2.5-3B-Instruct",
"base_model:quantized:ethicalabs/Kurtis-E1.1-Qwen2.5-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-25T19:23:16Z | ---
library_name: transformers
license: mit
datasets:
- ethicalabs/Kurtis-E1-SFT
language:
- en
base_model: ethicalabs/Kurtis-E1.1-Qwen2.5-3B-Instruct
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ethicalabs/Kurtis-E1.1-Qwen2.5-3B-Instruct - GGUF
This repo contains GGUF format model files for [ethicalabs/Kurtis-E1.1-Qwen2.5-3B-Instruct](https://huggingface.co/ethicalabs/Kurtis-E1.1-Qwen2.5-3B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q2_K.gguf) | Q2_K | 1.275 GB | smallest, significant quality loss - not recommended for most purposes |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q3_K_S.gguf) | Q3_K_S | 1.454 GB | very small, high quality loss |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q3_K_M.gguf) | Q3_K_M | 1.590 GB | very small, high quality loss |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q3_K_L.gguf) | Q3_K_L | 1.707 GB | small, substantial quality loss |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q4_0.gguf) | Q4_0 | 1.823 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q4_K_S.gguf) | Q4_K_S | 1.834 GB | small, greater quality loss |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q4_K_M.gguf) | Q4_K_M | 1.930 GB | medium, balanced quality - recommended |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q5_0.gguf) | Q5_0 | 2.170 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q5_K_S.gguf) | Q5_K_S | 2.170 GB | large, low quality loss - recommended |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q5_K_M.gguf) | Q5_K_M | 2.225 GB | large, very low quality loss - recommended |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q6_K.gguf) | Q6_K | 2.538 GB | very large, extremely low quality loss |
| [Kurtis-E1.1-Qwen2.5-3B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF/blob/main/Kurtis-E1.1-Qwen2.5-3B-Instruct-Q8_0.gguf) | Q8_0 | 3.285 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, downoad the individual model file the a local directory
```shell
huggingface-cli download tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF --include "Kurtis-E1.1-Qwen2.5-3B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ethicalabs_Kurtis-E1.1-Qwen2.5-3B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/YeungNLP_firefly-ziya-13b-GGUF | tensorblock | 2025-06-19T01:44:31Z | 16 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:YeungNLP/firefly-ziya-13b",
"base_model:quantized:YeungNLP/firefly-ziya-13b",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T19:08:08Z | ---
base_model: YeungNLP/firefly-ziya-13b
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## YeungNLP/firefly-ziya-13b - GGUF
This repo contains GGUF format model files for [YeungNLP/firefly-ziya-13b](https://huggingface.co/YeungNLP/firefly-ziya-13b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [firefly-ziya-13b-Q2_K.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q2_K.gguf) | Q2_K | 4.898 GB | smallest, significant quality loss - not recommended for most purposes |
| [firefly-ziya-13b-Q3_K_S.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q3_K_S.gguf) | Q3_K_S | 5.707 GB | very small, high quality loss |
| [firefly-ziya-13b-Q3_K_M.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q3_K_M.gguf) | Q3_K_M | 6.385 GB | very small, high quality loss |
| [firefly-ziya-13b-Q3_K_L.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q3_K_L.gguf) | Q3_K_L | 6.977 GB | small, substantial quality loss |
| [firefly-ziya-13b-Q4_0.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q4_0.gguf) | Q4_0 | 7.419 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [firefly-ziya-13b-Q4_K_S.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q4_K_S.gguf) | Q4_K_S | 7.476 GB | small, greater quality loss |
| [firefly-ziya-13b-Q4_K_M.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q4_K_M.gguf) | Q4_K_M | 7.919 GB | medium, balanced quality - recommended |
| [firefly-ziya-13b-Q5_0.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q5_0.gguf) | Q5_0 | 9.030 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [firefly-ziya-13b-Q5_K_S.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q5_K_S.gguf) | Q5_K_S | 9.030 GB | large, low quality loss - recommended |
| [firefly-ziya-13b-Q5_K_M.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q5_K_M.gguf) | Q5_K_M | 9.287 GB | large, very low quality loss - recommended |
| [firefly-ziya-13b-Q6_K.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q6_K.gguf) | Q6_K | 10.742 GB | very large, extremely low quality loss |
| [firefly-ziya-13b-Q8_0.gguf](https://huggingface.co/tensorblock/YeungNLP_firefly-ziya-13b-GGUF/blob/main/firefly-ziya-13b-Q8_0.gguf) | Q8_0 | 13.912 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/YeungNLP_firefly-ziya-13b-GGUF --include "firefly-ziya-13b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/YeungNLP_firefly-ziya-13b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Xenova_llama2.c-stories42M-GGUF | tensorblock | 2025-06-19T01:43:36Z | 94 | 0 | transformers.js | [
"transformers.js",
"gguf",
"transformers",
"TensorBlock",
"GGUF",
"base_model:Xenova/llama2.c-stories42M",
"base_model:quantized:Xenova/llama2.c-stories42M",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T17:04:36Z | ---
library_name: transformers.js
tags:
- transformers
- TensorBlock
- GGUF
base_model: Xenova/llama2.c-stories42M
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Xenova/llama2.c-stories42M - GGUF
This repo contains GGUF format model files for [Xenova/llama2.c-stories42M](https://huggingface.co/Xenova/llama2.c-stories42M).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama2.c-stories42M-Q2_K.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q2_K.gguf) | Q2_K | 0.030 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2.c-stories42M-Q3_K_S.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q3_K_S.gguf) | Q3_K_S | 0.033 GB | very small, high quality loss |
| [llama2.c-stories42M-Q3_K_M.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q3_K_M.gguf) | Q3_K_M | 0.034 GB | very small, high quality loss |
| [llama2.c-stories42M-Q3_K_L.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q3_K_L.gguf) | Q3_K_L | 0.035 GB | small, substantial quality loss |
| [llama2.c-stories42M-Q4_0.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q4_0.gguf) | Q4_0 | 0.038 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2.c-stories42M-Q4_K_S.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q4_K_S.gguf) | Q4_K_S | 0.039 GB | small, greater quality loss |
| [llama2.c-stories42M-Q4_K_M.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q4_K_M.gguf) | Q4_K_M | 0.040 GB | medium, balanced quality - recommended |
| [llama2.c-stories42M-Q5_0.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q5_0.gguf) | Q5_0 | 0.043 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2.c-stories42M-Q5_K_S.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q5_K_S.gguf) | Q5_K_S | 0.043 GB | large, low quality loss - recommended |
| [llama2.c-stories42M-Q5_K_M.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q5_K_M.gguf) | Q5_K_M | 0.044 GB | large, very low quality loss - recommended |
| [llama2.c-stories42M-Q6_K.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q6_K.gguf) | Q6_K | 0.050 GB | very large, extremely low quality loss |
| [llama2.c-stories42M-Q8_0.gguf](https://huggingface.co/tensorblock/Xenova_llama2.c-stories42M-GGUF/blob/main/llama2.c-stories42M-Q8_0.gguf) | Q8_0 | 0.062 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Xenova_llama2.c-stories42M-GGUF --include "llama2.c-stories42M-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Xenova_llama2.c-stories42M-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF | tensorblock | 2025-06-19T01:43:33Z | 59 | 1 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16",
"base_model:quantized:TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16",
"license:other",
"region:us"
] | null | 2025-04-25T13:24:59Z | ---
inference: false
license: other
tags:
- TensorBlock
- GGUF
base_model: TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16 - GGUF
This repo contains GGUF format model files for [TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q2_K.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q2_K.gguf) | Q2_K | 12.049 GB | smallest, significant quality loss - not recommended for most purposes |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q3_K_S.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q3_K_S.gguf) | Q3_K_S | 14.064 GB | very small, high quality loss |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q3_K_M.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q3_K_M.gguf) | Q3_K_M | 15.776 GB | very small, high quality loss |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q3_K_L.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q3_K_L.gguf) | Q3_K_L | 17.280 GB | small, substantial quality loss |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q4_0.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q4_0.gguf) | Q4_0 | 18.356 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q4_K_S.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q4_K_S.gguf) | Q4_K_S | 18.482 GB | small, greater quality loss |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q4_K_M.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q4_K_M.gguf) | Q4_K_M | 19.621 GB | medium, balanced quality - recommended |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q5_0.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q5_0.gguf) | Q5_0 | 22.395 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q5_K_S.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q5_K_S.gguf) | Q5_K_S | 22.395 GB | large, low quality loss - recommended |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q5_K_M.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q5_K_M.gguf) | Q5_K_M | 23.047 GB | large, very low quality loss - recommended |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q6_K.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q6_K.gguf) | Q6_K | 26.687 GB | very large, extremely low quality loss |
| [Vicuna-33B-1-3-SuperHOT-8K-fp16-Q8_0.gguf](https://huggingface.co/tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF/blob/main/Vicuna-33B-1-3-SuperHOT-8K-fp16-Q8_0.gguf) | Q8_0 | 34.565 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF --include "Vicuna-33B-1-3-SuperHOT-8K-fp16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TheBloke_Vicuna-33B-1-3-SuperHOT-8K-fp16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF | tensorblock | 2025-06-19T01:43:26Z | 40 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"base_model:WDKT/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B",
"base_model:quantized:WDKT/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-25T12:55:36Z | ---
license: llama3
language:
- zh
- en
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: WDKT/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## WDKT/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B - GGUF
This repo contains GGUF format model files for [WDKT/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B](https://huggingface.co/WDKT/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q2_K.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q2_K.gguf) | Q2_K | 26.375 GB | smallest, significant quality loss - not recommended for most purposes |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q3_K_S.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q3_K_S.gguf) | Q3_K_S | 30.912 GB | very small, high quality loss |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q3_K_M.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q3_K_M.gguf) | Q3_K_M | 34.267 GB | very small, high quality loss |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q3_K_L.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q3_K_L.gguf) | Q3_K_L | 37.141 GB | small, substantial quality loss |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q4_0.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q4_0.gguf) | Q4_0 | 39.970 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q4_K_S.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q4_K_S.gguf) | Q4_K_S | 40.347 GB | small, greater quality loss |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q4_K_M.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q4_K_M.gguf) | Q4_K_M | 42.520 GB | medium, balanced quality - recommended |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q5_0.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q5_0.gguf) | Q5_0 | 48.657 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q5_K_S.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q5_K_S.gguf) | Q5_K_S | 48.657 GB | large, low quality loss - recommended |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q5_K_M.gguf](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q5_K_M.gguf) | Q5_K_M | 49.950 GB | large, very low quality loss - recommended |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q6_K](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q6_K) | Q6_K | 57.888 GB | very large, extremely low quality loss |
| [Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q8_0](https://huggingface.co/tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF/blob/main/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q8_0) | Q8_0 | 74.975 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF --include "Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/WDKT_Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
visolex/bartpho-emotion | visolex | 2025-06-19T01:43:05Z | 1 | 0 | null | [
"safetensors",
"mbart",
"emotion-recognition",
"vietnamese",
"bartpho",
"text-classification",
"vi",
"dataset:VSMEC",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2025-06-16T15:07:38Z | ---
language: vi
tags:
- emotion-recognition
- vietnamese
- bartpho
license: apache-2.0
datasets:
- VSMEC
metrics:
- accuracy
- f1
model-index:
- name: bartpho-emotion
results:
- task:
type: text-classification
name: Emotion Recognition
dataset:
name: VSMEC
type: custom
metrics:
- name: Accuracy
type: accuracy
value: <INSERT_ACCURACY>
- name: F1 Score
type: f1
value: <INSERT_F1_SCORE>
base_model:
- vinai/bartpho-syllable
pipeline_tag: text-classification
---
# bartpho-emotion: Emotion Recognition for Vietnamese Text
This model is a fine-tuned version of [`vinai/bartpho-syllable`](https://huggingface.co/vinai/bartpho-syllable) on the **VSMEC** dataset for emotion recognition in Vietnamese text. It achieves state-of-the-art performance on this task.
## Model Details
- **Base Model**: [`vinai/bartpho-syllable`](https://huggingface.co/vinai/bartpho-syllable)
- **Dataset**: [VSMEC](https://github.com/uitnlp/vsmec) (Vietnamese Social Media Emotion Corpus)
- **Fine-tuning Framework**: HuggingFace Transformers
- **Hyperparameters** (see the illustrative sketch after this list):
- Batch size: `32`
- Learning rate: `5e-5`
- Epochs: `100`
- Max sequence length: `256`
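
For reference, here is a minimal sketch of how these hyperparameters could map onto a standard Hugging Face `Trainer` run. This is an illustrative reconstruction, not the authors' training script: the tiny in-memory dataset stands in for the full VSMEC corpus, and the `text`/`label` column names are assumptions.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/bartpho-syllable", num_labels=7  # 7 VSMEC emotion classes
)

# Tiny in-memory stand-in for VSMEC (assumption: the real run uses the full
# corpus; labels follow the mapping given in the Dataset section below).
train_ds = Dataset.from_dict({
    "text": ["TΓ΄i rαΊ₯t vui vΓ¬ hΓ΄m nay trα»i ΔαΊΉp!"],
    "label": [2],  # Enjoyment
}).map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=256))

args = TrainingArguments(
    output_dir="bartpho-emotion",
    per_device_train_batch_size=32,  # batch size from the card
    learning_rate=5e-5,              # learning rate from the card
    num_train_epochs=100,            # epochs from the card
)

Trainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer).train()
```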
## Dataset
The model was trained on the **VSMEC** dataset, which contains Vietnamese social media text annotated with emotion labels. The dataset includes the following emotion categories:
`{"Anger": 0, "Disgust": 1, "Enjoyment": 2, "Fear": 3, "Other": 4, "Sadness": 5, "Surprise": 6}`.
## Results
The model was evaluated using the following metrics:
- **Accuracy**: `<INSERT_ACCURACY>`
- **F1 Score**: `<INSERT_F1_SCORE>`
## Usage
You can use this model for emotion recognition in Vietnamese text. Below is an example of how to use it with the HuggingFace Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned tokenizer and classification model
tokenizer = AutoTokenizer.from_pretrained("visolex/bartpho-emotion")
model = AutoModelForSequenceClassification.from_pretrained("visolex/bartpho-emotion")

# Map class ids back to names using the VSMEC label mapping listed above
id2label = {0: "Anger", 1: "Disgust", 2: "Enjoyment", 3: "Fear", 4: "Other", 5: "Sadness", 6: "Surprise"}

text = "TΓ΄i rαΊ₯t vui vΓ¬ hΓ΄m nay trα»i ΔαΊΉp!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
print(f"Predicted emotion: {id2label[predicted_class]}") |
tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF | tensorblock | 2025-06-19T01:42:31Z | 59 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"dataset:DeepMount00/Sonnet-3.5-ITA-INSTRUCTION",
"dataset:DeepMount00/Sonnet-3.5-ITA-DPO",
"base_model:DeepMount00/Lexora-Lite-3B_v2",
"base_model:quantized:DeepMount00/Lexora-Lite-3B_v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T23:09:03Z | ---
library_name: transformers
datasets:
- DeepMount00/Sonnet-3.5-ITA-INSTRUCTION
- DeepMount00/Sonnet-3.5-ITA-DPO
tags:
- TensorBlock
- GGUF
base_model: DeepMount00/Lexora-Lite-3B_v2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## DeepMount00/Lexora-Lite-3B_v2 - GGUF
This repo contains GGUF format model files for [DeepMount00/Lexora-Lite-3B_v2](https://huggingface.co/DeepMount00/Lexora-Lite-3B_v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Lexora-Lite-3B_v2-Q2_K.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q2_K.gguf) | Q2_K | 1.275 GB | smallest, significant quality loss - not recommended for most purposes |
| [Lexora-Lite-3B_v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q3_K_S.gguf) | Q3_K_S | 1.454 GB | very small, high quality loss |
| [Lexora-Lite-3B_v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q3_K_M.gguf) | Q3_K_M | 1.590 GB | very small, high quality loss |
| [Lexora-Lite-3B_v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q3_K_L.gguf) | Q3_K_L | 1.707 GB | small, substantial quality loss |
| [Lexora-Lite-3B_v2-Q4_0.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q4_0.gguf) | Q4_0 | 1.823 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Lexora-Lite-3B_v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q4_K_S.gguf) | Q4_K_S | 1.834 GB | small, greater quality loss |
| [Lexora-Lite-3B_v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q4_K_M.gguf) | Q4_K_M | 1.930 GB | medium, balanced quality - recommended |
| [Lexora-Lite-3B_v2-Q5_0.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q5_0.gguf) | Q5_0 | 2.170 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Lexora-Lite-3B_v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q5_K_S.gguf) | Q5_K_S | 2.170 GB | large, low quality loss - recommended |
| [Lexora-Lite-3B_v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q5_K_M.gguf) | Q5_K_M | 2.225 GB | large, very low quality loss - recommended |
| [Lexora-Lite-3B_v2-Q6_K.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q6_K.gguf) | Q6_K | 2.538 GB | very large, extremely low quality loss |
| [Lexora-Lite-3B_v2-Q8_0.gguf](https://huggingface.co/tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF/blob/main/Lexora-Lite-3B_v2-Q8_0.gguf) | Q8_0 | 3.285 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF --include "Lexora-Lite-3B_v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/DeepMount00_Lexora-Lite-3B_v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF | tensorblock | 2025-06-19T01:42:05Z | 3 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNCJihunKim/Mistral-7B-SlimOrca-OP-8k",
"base_model:quantized:MNCJihunKim/Mistral-7B-SlimOrca-OP-8k",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T13:35:05Z | ---
base_model: MNCJihunKim/Mistral-7B-SlimOrca-OP-8k
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MNCJihunKim/Mistral-7B-SlimOrca-OP-8k - GGUF
This repo contains GGUF format model files for [MNCJihunKim/Mistral-7B-SlimOrca-OP-8k](https://huggingface.co/MNCJihunKim/Mistral-7B-SlimOrca-OP-8k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-SlimOrca-OP-8k-Q2_K.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-SlimOrca-OP-8k-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-OP-8k-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-OP-8k-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-SlimOrca-OP-8k-Q4_0.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-SlimOrca-OP-8k-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-SlimOrca-OP-8k-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-SlimOrca-OP-8k-Q5_0.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-SlimOrca-OP-8k-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-SlimOrca-OP-8k-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-SlimOrca-OP-8k-Q6_K.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-SlimOrca-OP-8k-Q8_0.gguf](https://huggingface.co/tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-8k-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF --include "Mistral-7B-SlimOrca-OP-8k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNCJihunKim_Mistral-7B-SlimOrca-OP-8k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF | tensorblock | 2025-06-19T01:41:56Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:mshen2/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up",
"base_model:quantized:mshen2/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T09:01:00Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: mshen2/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mshen2/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up - GGUF
This repo contains GGUF format model files for [mshen2/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up](https://huggingface.co/mshen2/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q2_K.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q3_K_S.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q3_K_M.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q3_K_L.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q4_0.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q4_K_S.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q4_K_M.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q5_0.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q5_K_S.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q5_K_M.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q6_K.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q8_0.gguf](https://huggingface.co/tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF/blob/main/qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF --include "qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mshen2_qwen2.5-7b-v4-short-wrapNW-nextWord-em-up-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF | tensorblock | 2025-06-19T01:41:44Z | 40 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"dataset:kyujinpy/KOR-gugugu-platypus-set",
"base_model:PracticeLLM/Custom-KoLLM-13B-v5",
"base_model:quantized:PracticeLLM/Custom-KoLLM-13B-v5",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T04:58:08Z | ---
language:
- ko
datasets:
- kyujinpy/KOR-gugugu-platypus-set
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- TensorBlock
- GGUF
base_model: PracticeLLM/Custom-KoLLM-13B-v5
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## PracticeLLM/Custom-KoLLM-13B-v5 - GGUF
This repo contains GGUF format model files for [PracticeLLM/Custom-KoLLM-13B-v5](https://huggingface.co/PracticeLLM/Custom-KoLLM-13B-v5).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Custom-KoLLM-13B-v5-Q2_K.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q2_K.gguf) | Q2_K | 4.939 GB | smallest, significant quality loss - not recommended for most purposes |
| [Custom-KoLLM-13B-v5-Q3_K_S.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q3_K_S.gguf) | Q3_K_S | 5.751 GB | very small, high quality loss |
| [Custom-KoLLM-13B-v5-Q3_K_M.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q3_K_M.gguf) | Q3_K_M | 6.430 GB | very small, high quality loss |
| [Custom-KoLLM-13B-v5-Q3_K_L.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q3_K_L.gguf) | Q3_K_L | 7.022 GB | small, substantial quality loss |
| [Custom-KoLLM-13B-v5-Q4_0.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q4_0.gguf) | Q4_0 | 7.468 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Custom-KoLLM-13B-v5-Q4_K_S.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q4_K_S.gguf) | Q4_K_S | 7.525 GB | small, greater quality loss |
| [Custom-KoLLM-13B-v5-Q4_K_M.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q4_K_M.gguf) | Q4_K_M | 7.968 GB | medium, balanced quality - recommended |
| [Custom-KoLLM-13B-v5-Q5_0.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q5_0.gguf) | Q5_0 | 9.083 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Custom-KoLLM-13B-v5-Q5_K_S.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q5_K_S.gguf) | Q5_K_S | 9.083 GB | large, low quality loss - recommended |
| [Custom-KoLLM-13B-v5-Q5_K_M.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q5_K_M.gguf) | Q5_K_M | 9.341 GB | large, very low quality loss - recommended |
| [Custom-KoLLM-13B-v5-Q6_K.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q6_K.gguf) | Q6_K | 10.800 GB | very large, extremely low quality loss |
| [Custom-KoLLM-13B-v5-Q8_0.gguf](https://huggingface.co/tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF/blob/main/Custom-KoLLM-13B-v5-Q8_0.gguf) | Q8_0 | 13.988 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF --include "Custom-KoLLM-13B-v5-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/PracticeLLM_Custom-KoLLM-13B-v5-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF | tensorblock | 2025-06-19T01:41:39Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:ALIN-LLM/finetune-llama-3.2-1b-mbpp",
"base_model:quantized:ALIN-LLM/finetune-llama-3.2-1b-mbpp",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T04:43:52Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: ALIN-LLM/finetune-llama-3.2-1b-mbpp
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ALIN-LLM/finetune-llama-3.2-1b-mbpp - GGUF
This repo contains GGUF format model files for [ALIN-LLM/finetune-llama-3.2-1b-mbpp](https://huggingface.co/ALIN-LLM/finetune-llama-3.2-1b-mbpp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}
<|user|>
{prompt}
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [finetune-llama-3.2-1b-mbpp-Q2_K.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q2_K.gguf) | Q2_K | 0.581 GB | smallest, significant quality loss - not recommended for most purposes |
| [finetune-llama-3.2-1b-mbpp-Q3_K_S.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q3_K_S.gguf) | Q3_K_S | 0.642 GB | very small, high quality loss |
| [finetune-llama-3.2-1b-mbpp-Q3_K_M.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q3_K_M.gguf) | Q3_K_M | 0.691 GB | very small, high quality loss |
| [finetune-llama-3.2-1b-mbpp-Q3_K_L.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q3_K_L.gguf) | Q3_K_L | 0.733 GB | small, substantial quality loss |
| [finetune-llama-3.2-1b-mbpp-Q4_0.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q4_0.gguf) | Q4_0 | 0.771 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [finetune-llama-3.2-1b-mbpp-Q4_K_S.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q4_K_S.gguf) | Q4_K_S | 0.776 GB | small, greater quality loss |
| [finetune-llama-3.2-1b-mbpp-Q4_K_M.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q4_K_M.gguf) | Q4_K_M | 0.808 GB | medium, balanced quality - recommended |
| [finetune-llama-3.2-1b-mbpp-Q5_0.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q5_0.gguf) | Q5_0 | 0.893 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [finetune-llama-3.2-1b-mbpp-Q5_K_S.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q5_K_S.gguf) | Q5_K_S | 0.893 GB | large, low quality loss - recommended |
| [finetune-llama-3.2-1b-mbpp-Q5_K_M.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q5_K_M.gguf) | Q5_K_M | 0.911 GB | large, very low quality loss - recommended |
| [finetune-llama-3.2-1b-mbpp-Q6_K.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q6_K.gguf) | Q6_K | 1.022 GB | very large, extremely low quality loss |
| [finetune-llama-3.2-1b-mbpp-Q8_0.gguf](https://huggingface.co/tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF/blob/main/finetune-llama-3.2-1b-mbpp-Q8_0.gguf) | Q8_0 | 1.321 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF --include "finetune-llama-3.2-1b-mbpp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ALIN-LLM_finetune-llama-3.2-1b-mbpp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF | tensorblock | 2025-06-19T01:41:35Z | 5 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"es",
"dataset:Danielbrdz/Barcenas-lmsys-Dataset",
"base_model:Danielbrdz/Barcenas-Mistral-7b",
"base_model:quantized:Danielbrdz/Barcenas-Mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T03:16:33Z | ---
license: apache-2.0
datasets:
- Danielbrdz/Barcenas-lmsys-Dataset
language:
- en
- es
tags:
- TensorBlock
- GGUF
base_model: Danielbrdz/Barcenas-Mistral-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Danielbrdz/Barcenas-Mistral-7b - GGUF
This repo contains GGUF format model files for [Danielbrdz/Barcenas-Mistral-7b](https://huggingface.co/Danielbrdz/Barcenas-Mistral-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Barcenas-Mistral-7b-Q2_K.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Barcenas-Mistral-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Barcenas-Mistral-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Barcenas-Mistral-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Barcenas-Mistral-7b-Q4_0.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Barcenas-Mistral-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Barcenas-Mistral-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Barcenas-Mistral-7b-Q5_0.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Barcenas-Mistral-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Barcenas-Mistral-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Barcenas-Mistral-7b-Q6_K.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Barcenas-Mistral-7b-Q8_0.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF --include "Barcenas-Mistral-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF | tensorblock | 2025-06-19T01:41:33Z | 1,233 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:mlxha/DeepSeek-R1-Distill-Llama-8B-notemplate",
"base_model:quantized:mlxha/DeepSeek-R1-Distill-Llama-8B-notemplate",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T03:15:12Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: mlxha/DeepSeek-R1-Distill-Llama-8B-notemplate
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlxha/DeepSeek-R1-Distill-Llama-8B-notemplate - GGUF
This repo contains GGUF format model files for [mlxha/DeepSeek-R1-Distill-Llama-8B-notemplate](https://huggingface.co/mlxha/DeepSeek-R1-Distill-Llama-8B-notemplate).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<｜begin▁of▁sentence｜>{system_prompt}
{prompt}
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q2_K.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q4_0.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q5_0.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q6_K.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [DeepSeek-R1-Distill-Llama-8B-notemplate-Q8_0.gguf](https://huggingface.co/tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF/blob/main/DeepSeek-R1-Distill-Llama-8B-notemplate-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF --include "DeepSeek-R1-Distill-Llama-8B-notemplate-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
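The same files can also be fetched from Python with `hf_hub_download`, which can be more convenient in scripted pipelines. This is a sketch; the chosen quant file and target directory are examples.
```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repo into a local directory (both values are examples)
local_path = hf_hub_download(
    repo_id="tensorblock/mlxha_DeepSeek-R1-Distill-Llama-8B-notemplate-GGUF",
    filename="DeepSeek-R1-Distill-Llama-8B-notemplate-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # absolute path of the downloaded file
```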
|
tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF | tensorblock | 2025-06-19T01:41:25Z | 218 | 0 | transformers | [
"transformers",
"gguf",
"cybersecurity",
"pretraining",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:trendmicro-ailab/Primus-Reasoning",
"dataset:trendmicro-ailab/Primus-Seed",
"dataset:trendmicro-ailab/Primus-FineWeb",
"dataset:trendmicro-ailab/Primus-Instruct",
"base_model:trendmicro-ailab/Llama-Primus-Reasoning",
"base_model:quantized:trendmicro-ailab/Llama-Primus-Reasoning",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-23T01:10:39Z | ---
license: mit
datasets:
- trendmicro-ailab/Primus-Reasoning
- trendmicro-ailab/Primus-Seed
- trendmicro-ailab/Primus-FineWeb
- trendmicro-ailab/Primus-Instruct
language:
- en
base_model: trendmicro-ailab/Llama-Primus-Reasoning
pipeline_tag: text-generation
library_name: transformers
tags:
- cybersecurity
- pretraining
- TensorBlock
- GGUF
extra_gated_fields:
Affiliation: text
Country: country
I want to use this model for:
type: select
options:
- Research
- Commercial
- label: Other
value: other
Job title:
type: select
options:
- Student
- Research graduate
- AI researcher
- AI developer/engineer
- Cybersecurity researcher
- Reporter
- Other
geo: ip_location
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## trendmicro-ailab/Llama-Primus-Reasoning - GGUF
This repo contains GGUF format model files for [trendmicro-ailab/Llama-Primus-Reasoning](https://huggingface.co/trendmicro-ailab/Llama-Primus-Reasoning).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
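The placeholders in this template can be filled with plain string formatting before the text is handed to a llama.cpp runtime. A minimal sketch, joining the lines exactly as shown above; the system prompt and question are illustrative.
```python
# Build a prompt from the Llama-3-style template shown above (lines joined with single newlines)
template = "\n".join([
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>",
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>",
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>",
])

# Example values; replace with your own system prompt and user message
text = template.format(
    system_prompt="You are a helpful cybersecurity reasoning assistant.",
    prompt="Briefly explain what a buffer overflow is.",
)
print(text)
```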
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-Primus-Reasoning-Q2_K.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-Primus-Reasoning-Q3_K_S.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [Llama-Primus-Reasoning-Q3_K_M.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-Primus-Reasoning-Q3_K_L.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-Primus-Reasoning-Q4_0.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-Primus-Reasoning-Q4_K_S.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-Primus-Reasoning-Q4_K_M.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-Primus-Reasoning-Q5_0.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-Primus-Reasoning-Q5_K_S.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-Primus-Reasoning-Q5_K_M.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-Primus-Reasoning-Q6_K.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-Primus-Reasoning-Q8_0.gguf](https://huggingface.co/tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF/blob/main/Llama-Primus-Reasoning-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF --include "Llama-Primus-Reasoning-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/trendmicro-ailab_Llama-Primus-Reasoning-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
DS4H-ICTU/linguo_mt_fub_en | DS4H-ICTU | 2025-06-19T01:40:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ROMANCE",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ROMANCE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-06-19T01:40:07Z | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: linguo_mt_fub_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# linguo_mt_fub_en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6569
- Bleu: 11.2552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
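For reference, these settings correspond roughly to the `Seq2SeqTrainingArguments` sketch below. This is a reconstruction from the list above, not the original training script; the output directory and the `predict_with_generate` flag are assumptions.
```python
# Reconstruction sketch of the hyperparameters listed above (transformers 4.52.x argument names)
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="linguo_mt_fub_en",      # assumption: output directory name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",                # AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3,
    predict_with_generate=True,         # assumption: needed so BLEU can be computed at evaluation
)
```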
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8906 | 1.0 | 1534 | 0.7769 | 7.7862 |
| 0.7049 | 2.0 | 3068 | 0.6852 | 9.9392 |
| 0.6793 | 3.0 | 4602 | 0.6569 | 11.2552 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
tensorblock/s1k-GGUF | tensorblock | 2025-06-19T01:40:14Z | 148 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:TianshengHuang/s1k",
"base_model:quantized:TianshengHuang/s1k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-23T15:14:48Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: TianshengHuang/s1k
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## TianshengHuang/s1k - GGUF
This repo contains GGUF format model files for [TianshengHuang/s1k](https://huggingface.co/TianshengHuang/s1k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [s1k-Q2_K.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q2_K.gguf) | Q2_K | 12.313 GB | smallest, significant quality loss - not recommended for most purposes |
| [s1k-Q3_K_S.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q3_K_S.gguf) | Q3_K_S | 14.392 GB | very small, high quality loss |
| [s1k-Q3_K_M.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q3_K_M.gguf) | Q3_K_M | 15.935 GB | very small, high quality loss |
| [s1k-Q3_K_L.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q3_K_L.gguf) | Q3_K_L | 17.247 GB | small, substantial quality loss |
| [s1k-Q4_0.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q4_0.gguf) | Q4_0 | 18.640 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [s1k-Q4_K_S.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q4_K_S.gguf) | Q4_K_S | 18.784 GB | small, greater quality loss |
| [s1k-Q4_K_M.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q4_K_M.gguf) | Q4_K_M | 19.851 GB | medium, balanced quality - recommended |
| [s1k-Q5_0.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q5_0.gguf) | Q5_0 | 22.638 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [s1k-Q5_K_S.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q5_K_S.gguf) | Q5_K_S | 22.638 GB | large, low quality loss - recommended |
| [s1k-Q5_K_M.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q5_K_M.gguf) | Q5_K_M | 23.262 GB | large, very low quality loss - recommended |
| [s1k-Q6_K.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q6_K.gguf) | Q6_K | 26.886 GB | very large, extremely low quality loss |
| [s1k-Q8_0.gguf](https://huggingface.co/tensorblock/s1k-GGUF/blob/main/s1k-Q8_0.gguf) | Q8_0 | 34.821 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/s1k-GGUF --include "s1k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/s1k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
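Since the prompt template above is ChatML, a downloaded file can also be served through the chat API of the `llama-cpp-python` bindings, which applies the chat template stored in the GGUF metadata when one is present. A sketch; the model path, context size, and messages are illustrative.
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file was downloaded into MY_LOCAL_DIR with the commands above
llm = Llama(model_path="MY_LOCAL_DIR/s1k-Q4_K_M.gguf", n_ctx=4096)

# create_chat_completion formats the messages with the template embedded in the GGUF (ChatML here)
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful, step-by-step reasoner."},
        {"role": "user", "content": "What is the sum of the first 100 positive integers?"},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```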
|
tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF | tensorblock | 2025-06-19T01:39:39Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:NAM00/context_tuned_patient_matching_Llama-3.2-1B-Instruct",
"base_model:quantized:NAM00/context_tuned_patient_matching_Llama-3.2-1B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-23T08:29:59Z | ---
library_name: transformers
license: llama3.2
base_model: NAM00/context_tuned_patient_matching_Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: context_tuned_patient_matching_Llama-3.2-1B-Instruct
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## NAM00/context_tuned_patient_matching_Llama-3.2-1B-Instruct - GGUF
This repo contains GGUF format model files for [NAM00/context_tuned_patient_matching_Llama-3.2-1B-Instruct](https://huggingface.co/NAM00/context_tuned_patient_matching_Llama-3.2-1B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 23 Mar 2025
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q2_K.gguf) | Q2_K | 0.581 GB | smallest, significant quality loss - not recommended for most purposes |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.642 GB | very small, high quality loss |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.691 GB | very small, high quality loss |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.733 GB | small, substantial quality loss |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q4_0.gguf) | Q4_0 | 0.771 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.776 GB | small, greater quality loss |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.808 GB | medium, balanced quality - recommended |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q5_0.gguf) | Q5_0 | 0.893 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q5_K_S.gguf) | Q5_K_S | 0.893 GB | large, low quality loss - recommended |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q5_K_M.gguf) | Q5_K_M | 0.912 GB | large, very low quality loss - recommended |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q6_K.gguf) | Q6_K | 1.022 GB | very large, extremely low quality loss |
| [context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF/blob/main/context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q8_0.gguf) | Q8_0 | 1.321 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF --include "context_tuned_patient_matching_Llama-3.2-1B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/context_tuned_patient_matching_Llama-3.2-1B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/gemma-2-2b-neogenesis-ita-GGUF | tensorblock | 2025-06-19T01:39:12Z | 250 | 1 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"it",
"en",
"dataset:efederici/capybara-claude-15k-ita",
"dataset:anakin87/fine-instructions-ita-70k",
"dataset:mii-llm/argilla-math-preferences-it",
"dataset:ruggsea/wsdm2024-cot-dataset",
"dataset:anakin87/evol-dpo-ita-reranked",
"dataset:anakin87/gemma-vs-gemma-preferences",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:anakin87/gemma-2-2b-neogenesis-ita",
"base_model:quantized:anakin87/gemma-2-2b-neogenesis-ita",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-23T04:38:37Z | ---
license: gemma
language:
- it
- en
base_model: anakin87/gemma-2-2b-neogenesis-ita
pipeline_tag: text-generation
library_name: transformers
datasets:
- efederici/capybara-claude-15k-ita
- anakin87/fine-instructions-ita-70k
- mii-llm/argilla-math-preferences-it
- ruggsea/wsdm2024-cot-dataset
- anakin87/evol-dpo-ita-reranked
- anakin87/gemma-vs-gemma-preferences
- mlabonne/orpo-dpo-mix-40k
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## anakin87/gemma-2-2b-neogenesis-ita - GGUF
This repo contains GGUF format model files for [anakin87/gemma-2-2b-neogenesis-ita](https://huggingface.co/anakin87/gemma-2-2b-neogenesis-ita).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-2-2b-neogenesis-ita-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q2_K.gguf) | Q2_K | 1.230 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-2-2b-neogenesis-ita-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q3_K_S.gguf) | Q3_K_S | 1.361 GB | very small, high quality loss |
| [gemma-2-2b-neogenesis-ita-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q3_K_M.gguf) | Q3_K_M | 1.462 GB | very small, high quality loss |
| [gemma-2-2b-neogenesis-ita-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q3_K_L.gguf) | Q3_K_L | 1.550 GB | small, substantial quality loss |
| [gemma-2-2b-neogenesis-ita-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q4_0.gguf) | Q4_0 | 1.630 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-2-2b-neogenesis-ita-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q4_K_S.gguf) | Q4_K_S | 1.639 GB | small, greater quality loss |
| [gemma-2-2b-neogenesis-ita-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q4_K_M.gguf) | Q4_K_M | 1.709 GB | medium, balanced quality - recommended |
| [gemma-2-2b-neogenesis-ita-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q5_0.gguf) | Q5_0 | 1.883 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-2-2b-neogenesis-ita-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q5_K_S.gguf) | Q5_K_S | 1.883 GB | large, low quality loss - recommended |
| [gemma-2-2b-neogenesis-ita-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q5_K_M.gguf) | Q5_K_M | 1.923 GB | large, very low quality loss - recommended |
| [gemma-2-2b-neogenesis-ita-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q6_K.gguf) | Q6_K | 2.151 GB | very large, extremely low quality loss |
| [gemma-2-2b-neogenesis-ita-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-neogenesis-ita-GGUF/blob/main/gemma-2-2b-neogenesis-ita-Q8_0.gguf) | Q8_0 | 2.784 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gemma-2-2b-neogenesis-ita-GGUF --include "gemma-2-2b-neogenesis-ita-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gemma-2-2b-neogenesis-ita-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ckpt47k-GGUF | tensorblock | 2025-06-19T01:38:20Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:hendrydong/ckpt47k",
"base_model:quantized:hendrydong/ckpt47k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-22T14:37:55Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: hendrydong/ckpt47k
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## hendrydong/ckpt47k - GGUF
This repo contains GGUF format model files for [hendrydong/ckpt47k](https://huggingface.co/hendrydong/ckpt47k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ckpt47k-Q2_K.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q2_K.gguf) | Q2_K | 3.014 GB | smallest, significant quality loss - not recommended for most purposes |
| [ckpt47k-Q3_K_S.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q3_K_S.gguf) | Q3_K_S | 3.491 GB | very small, high quality loss |
| [ckpt47k-Q3_K_M.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q3_K_M.gguf) | Q3_K_M | 3.807 GB | very small, high quality loss |
| [ckpt47k-Q3_K_L.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q3_K_L.gguf) | Q3_K_L | 4.087 GB | small, substantial quality loss |
| [ckpt47k-Q4_0.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q4_0.gguf) | Q4_0 | 4.429 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ckpt47k-Q4_K_S.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q4_K_S.gguf) | Q4_K_S | 4.456 GB | small, greater quality loss |
| [ckpt47k-Q4_K_M.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q4_K_M.gguf) | Q4_K_M | 4.681 GB | medium, balanced quality - recommended |
| [ckpt47k-Q5_0.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q5_0.gguf) | Q5_0 | 5.313 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ckpt47k-Q5_K_S.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q5_K_S.gguf) | Q5_K_S | 5.313 GB | large, low quality loss - recommended |
| [ckpt47k-Q5_K_M.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q5_K_M.gguf) | Q5_K_M | 5.443 GB | large, very low quality loss - recommended |
| [ckpt47k-Q6_K.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q6_K.gguf) | Q6_K | 6.252 GB | very large, extremely low quality loss |
| [ckpt47k-Q8_0.gguf](https://huggingface.co/tensorblock/ckpt47k-GGUF/blob/main/ckpt47k-Q8_0.gguf) | Q8_0 | 8.095 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ckpt47k-GGUF --include "ckpt47k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ckpt47k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF | tensorblock | 2025-06-19T01:37:46Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b",
"base_model:quantized:mlfoundations-dev/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-22T07:20:34Z | ---
library_name: transformers
license: other
base_model: mlfoundations-dev/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: mlfoundations-dev_code-stratos-verified-scaled-0.25_stratos_7b
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b](https://huggingface.co/mlfoundations-dev/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q2_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q4_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q5_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q6_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q8_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF --include "mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_code-stratos-verified-scaled-0_25_stratos_7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/OpenR1-Qwen-7B-Turkish-GGUF | tensorblock | 2025-06-19T01:37:41Z | 79 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"thinking",
"reasoning",
"deepseek",
"qwen",
"TensorBlock",
"GGUF",
"tr",
"dataset:WiroAI/dolphin-r1-turkish",
"base_model:WiroAI/OpenR1-Qwen-7B-Turkish",
"base_model:quantized:WiroAI/OpenR1-Qwen-7B-Turkish",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-22T06:06:35Z | ---
datasets: WiroAI/dolphin-r1-turkish
library_name: transformers
model_name: OpenR1-Qwen-7B-Turkish
tags:
- generated_from_trainer
- trl
- sft
- thinking
- reasoning
- deepseek
- qwen
- TensorBlock
- GGUF
licence: license
license: apache-2.0
language:
- tr
base_model: WiroAI/OpenR1-Qwen-7B-Turkish
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## WiroAI/OpenR1-Qwen-7B-Turkish - GGUF
This repo contains GGUF format model files for [WiroAI/OpenR1-Qwen-7B-Turkish](https://huggingface.co/WiroAI/OpenR1-Qwen-7B-Turkish).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OpenR1-Qwen-7B-Turkish-Q2_K.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [OpenR1-Qwen-7B-Turkish-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [OpenR1-Qwen-7B-Turkish-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [OpenR1-Qwen-7B-Turkish-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [OpenR1-Qwen-7B-Turkish-Q4_0.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OpenR1-Qwen-7B-Turkish-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [OpenR1-Qwen-7B-Turkish-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [OpenR1-Qwen-7B-Turkish-Q5_0.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OpenR1-Qwen-7B-Turkish-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [OpenR1-Qwen-7B-Turkish-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [OpenR1-Qwen-7B-Turkish-Q6_K.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [OpenR1-Qwen-7B-Turkish-Q8_0.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-Turkish-GGUF/blob/main/OpenR1-Qwen-7B-Turkish-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenR1-Qwen-7B-Turkish-GGUF --include "OpenR1-Qwen-7B-Turkish-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenR1-Qwen-7B-Turkish-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
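Alternatively, the same file can be fetched from Python with the `huggingface_hub` package installed above. This is a minimal sketch; the quant choice and target directory are only examples:
```python
from huggingface_hub import hf_hub_download

# Fetch a single quantized file from this repo into a local directory.
# The filename must match one of the entries in the table above.
local_path = hf_hub_download(
    repo_id="tensorblock/OpenR1-Qwen-7B-Turkish-GGUF",
    filename="OpenR1-Qwen-7B-Turkish-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # path of the downloaded .gguf file
```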
|
tensorblock/granite-guardian-3.2-3b-a800m-GGUF | tensorblock | 2025-06-19T01:37:19Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:ibm-granite/granite-guardian-3.2-3b-a800m",
"base_model:quantized:ibm-granite/granite-guardian-3.2-3b-a800m",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-22T02:31:37Z | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: ibm-granite/granite-guardian-3.2-3b-a800m
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ibm-granite/granite-guardian-3.2-3b-a800m - GGUF
This repo contains GGUF format model files for [ibm-granite/granite-guardian-3.2-3b-a800m](https://huggingface.co/ibm-granite/granite-guardian-3.2-3b-a800m).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|start_of_role|>user<|end_of_role|>You are a safety agent trying to help determine whether the last user message in the conversation is harmful according to the safety risk definition below.
<start_of_conversation>
User Message: {prompt}
<end_of_conversation>
Our safety risk definition is defined below:
<start_of_risk_definition>
- The last 'User' message is considered universally harmful.
<end_of_risk_definition>
Is the last user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
```
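For convenience, the template can be filled programmatically before being handed to whatever GGUF runtime you use. The sketch below only performs the string substitution; the template text is copied verbatim from above, and `user_message` is the only input:
```python
# Minimal sketch: substitute a user message into the guardian prompt template above.
GUARDIAN_TEMPLATE = """<|start_of_role|>user<|end_of_role|>You are a safety agent trying to help determine whether the last user message in the conversation is harmful according to the safety risk definition below.
<start_of_conversation>
User Message: {prompt}
<end_of_conversation>
Our safety risk definition is defined below:
<start_of_risk_definition>
- The last 'User' message is considered universally harmful.
<end_of_risk_definition>
Is the last user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>"""

def build_guardian_prompt(user_message: str) -> str:
    # .replace avoids clashes with any literal braces in the message itself.
    return GUARDIAN_TEMPLATE.replace("{prompt}", user_message)

print(build_guardian_prompt("How do I reset my router password?"))
```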
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [granite-guardian-3.2-3b-a800m-Q2_K.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q2_K.gguf) | Q2_K | 1.241 GB | smallest, significant quality loss - not recommended for most purposes |
| [granite-guardian-3.2-3b-a800m-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q3_K_S.gguf) | Q3_K_S | 1.456 GB | very small, high quality loss |
| [granite-guardian-3.2-3b-a800m-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q3_K_M.gguf) | Q3_K_M | 1.611 GB | very small, high quality loss |
| [granite-guardian-3.2-3b-a800m-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q3_K_L.gguf) | Q3_K_L | 1.742 GB | small, substantial quality loss |
| [granite-guardian-3.2-3b-a800m-Q4_0.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q4_0.gguf) | Q4_0 | 1.884 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [granite-guardian-3.2-3b-a800m-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q4_K_S.gguf) | Q4_K_S | 1.900 GB | small, greater quality loss |
| [granite-guardian-3.2-3b-a800m-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q4_K_M.gguf) | Q4_K_M | 2.017 GB | medium, balanced quality - recommended |
| [granite-guardian-3.2-3b-a800m-Q5_0.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q5_0.gguf) | Q5_0 | 2.287 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [granite-guardian-3.2-3b-a800m-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q5_K_S.gguf) | Q5_K_S | 2.287 GB | large, low quality loss - recommended |
| [granite-guardian-3.2-3b-a800m-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q5_K_M.gguf) | Q5_K_M | 2.355 GB | large, very low quality loss - recommended |
| [granite-guardian-3.2-3b-a800m-Q6_K.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q6_K.gguf) | Q6_K | 2.714 GB | very large, extremely low quality loss |
| [granite-guardian-3.2-3b-a800m-Q8_0.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q8_0.gguf) | Q8_0 | 3.513 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/granite-guardian-3.2-3b-a800m-GGUF --include "granite-guardian-3.2-3b-a800m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/granite-guardian-3.2-3b-a800m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/s1.1-14B-GGUF | tensorblock | 2025-06-19T01:36:32Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"TensorBlock",
"GGUF",
"base_model:simplescaling/s1.1-14B",
"base_model:quantized:simplescaling/s1.1-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-21T14:26:51Z | ---
base_model: simplescaling/s1.1-14B
library_name: transformers
model_name: Qwen2.5-14B-Instruct-20250308_204224
tags:
- generated_from_trainer
- trl
- sft
- TensorBlock
- GGUF
licence: license
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## simplescaling/s1.1-14B - GGUF
This repo contains GGUF format model files for [simplescaling/s1.1-14B](https://huggingface.co/simplescaling/s1.1-14B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [s1.1-14B-Q2_K.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q2_K.gguf) | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |
| [s1.1-14B-Q3_K_S.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q3_K_S.gguf) | Q3_K_S | 6.660 GB | very small, high quality loss |
| [s1.1-14B-Q3_K_M.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q3_K_M.gguf) | Q3_K_M | 7.339 GB | very small, high quality loss |
| [s1.1-14B-Q3_K_L.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q3_K_L.gguf) | Q3_K_L | 7.925 GB | small, substantial quality loss |
| [s1.1-14B-Q4_0.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q4_0.gguf) | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [s1.1-14B-Q4_K_S.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q4_K_S.gguf) | Q4_K_S | 8.573 GB | small, greater quality loss |
| [s1.1-14B-Q4_K_M.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q4_K_M.gguf) | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |
| [s1.1-14B-Q5_0.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q5_0.gguf) | Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [s1.1-14B-Q5_K_S.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q5_K_S.gguf) | Q5_K_S | 10.267 GB | large, low quality loss - recommended |
| [s1.1-14B-Q5_K_M.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q5_K_M.gguf) | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |
| [s1.1-14B-Q6_K.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q6_K.gguf) | Q6_K | 12.125 GB | very large, extremely low quality loss |
| [s1.1-14B-Q8_0.gguf](https://huggingface.co/tensorblock/s1.1-14B-GGUF/blob/main/s1.1-14B-Q8_0.gguf) | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/s1.1-14B-GGUF --include "s1.1-14B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/s1.1-14B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
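Once a file is on disk, one way to run it locally is through the `llama-cpp-python` bindings. This is a rough sketch under a few assumptions: the package is installed (`pip install llama-cpp-python`), the Q4_K_M file from the table above is in the working directory, and the context size is an arbitrary example value:
```python
from llama_cpp import Llama

# Load the quantized model; model_path and n_ctx are illustrative choices.
llm = Llama(model_path="s1.1-14B-Q4_K_M.gguf", n_ctx=4096)

# The ChatML template from this card, filled in by hand.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is 17 * 24?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```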
|
tensorblock/HelpingAI-3-GGUF | tensorblock | 2025-06-19T01:36:21Z | 99 | 1 | transformers | [
"transformers",
"gguf",
"HelpingAI",
"Emotionally-Intelligent",
"EQ-focused",
"Conversational",
"SLM",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:HelpingAI/HelpingAI-3",
"base_model:quantized:HelpingAI/HelpingAI-3",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-21T13:01:48Z | ---
license: other
license_name: helpingai
license_link: https://helpingai.co/license
pipeline_tag: text-generation
language:
- en
tags:
- HelpingAI
- Emotionally-Intelligent
- EQ-focused
- Conversational
- SLM
- TensorBlock
- GGUF
library_name: transformers
base_model: HelpingAI/HelpingAI-3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## HelpingAI/HelpingAI-3 - GGUF
This repo contains GGUF format model files for [HelpingAI/HelpingAI-3](https://huggingface.co/HelpingAI/HelpingAI-3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [HelpingAI-3-Q2_K.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q2_K.gguf) | Q2_K | 3.924 GB | smallest, significant quality loss - not recommended for most purposes |
| [HelpingAI-3-Q3_K_S.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q3_K_S.gguf) | Q3_K_S | 4.591 GB | very small, high quality loss |
| [HelpingAI-3-Q3_K_M.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q3_K_M.gguf) | Q3_K_M | 5.052 GB | very small, high quality loss |
| [HelpingAI-3-Q3_K_L.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q3_K_L.gguf) | Q3_K_L | 5.451 GB | small, substantial quality loss |
| [HelpingAI-3-Q4_0.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q4_0.gguf) | Q4_0 | 5.906 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [HelpingAI-3-Q4_K_S.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q4_K_S.gguf) | Q4_K_S | 5.952 GB | small, greater quality loss |
| [HelpingAI-3-Q4_K_M.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q4_K_M.gguf) | Q4_K_M | 6.288 GB | medium, balanced quality - recommended |
| [HelpingAI-3-Q5_0.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q5_0.gguf) | Q5_0 | 7.144 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [HelpingAI-3-Q5_K_S.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q5_K_S.gguf) | Q5_K_S | 7.144 GB | large, low quality loss - recommended |
| [HelpingAI-3-Q5_K_M.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q5_K_M.gguf) | Q5_K_M | 7.341 GB | large, very low quality loss - recommended |
| [HelpingAI-3-Q6_K.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q6_K.gguf) | Q6_K | 8.459 GB | very large, extremely low quality loss |
| [HelpingAI-3-Q8_0.gguf](https://huggingface.co/tensorblock/HelpingAI-3-GGUF/blob/main/HelpingAI-3-Q8_0.gguf) | Q8_0 | 10.955 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/HelpingAI-3-GGUF --include "HelpingAI-3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/HelpingAI-3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF | tensorblock | 2025-06-19T01:36:07Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b",
"base_model:quantized:mlfoundations-dev/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-21T11:03:55Z | ---
library_name: transformers
license: other
base_model: mlfoundations-dev/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0.25_stratos_7b
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b](https://huggingface.co/mlfoundations-dev/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q2_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q4_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q5_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q6_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q8_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF --include "mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_science-and-puzzle-stratos-verified-scaled-0_25_stratos_7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF | tensorblock | 2025-06-19T01:35:34Z | 132 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:cognitivecomputations/dolphin-r1",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:HuggingFaceTB/smoltalk",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"base_model:cognitivecomputations/Dolphin3.0-R1-Mistral-24B",
"base_model:quantized:cognitivecomputations/Dolphin3.0-R1-Mistral-24B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-21T03:26:18Z | ---
datasets:
- cognitivecomputations/dolphin-r1
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
base_model: cognitivecomputations/Dolphin3.0-R1-Mistral-24B
pipeline_tag: text-generation
library_name: transformers
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## cognitivecomputations/Dolphin3.0-R1-Mistral-24B - GGUF
This repo contains GGUF format model files for [cognitivecomputations/Dolphin3.0-R1-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-R1-Mistral-24B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
<think>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Dolphin3.0-R1-Mistral-24B-Q2_K.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q2_K.gguf) | Q2_K | 8.890 GB | smallest, significant quality loss - not recommended for most purposes |
| [Dolphin3.0-R1-Mistral-24B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q3_K_S.gguf) | Q3_K_S | 10.400 GB | very small, high quality loss |
| [Dolphin3.0-R1-Mistral-24B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q3_K_M.gguf) | Q3_K_M | 11.474 GB | very small, high quality loss |
| [Dolphin3.0-R1-Mistral-24B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q3_K_L.gguf) | Q3_K_L | 12.401 GB | small, substantial quality loss |
| [Dolphin3.0-R1-Mistral-24B-Q4_0.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q4_0.gguf) | Q4_0 | 13.442 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Dolphin3.0-R1-Mistral-24B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q4_K_S.gguf) | Q4_K_S | 13.549 GB | small, greater quality loss |
| [Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf) | Q4_K_M | 14.334 GB | medium, balanced quality - recommended |
| [Dolphin3.0-R1-Mistral-24B-Q5_0.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q5_0.gguf) | Q5_0 | 16.304 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Dolphin3.0-R1-Mistral-24B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q5_K_S.gguf) | Q5_K_S | 16.304 GB | large, low quality loss - recommended |
| [Dolphin3.0-R1-Mistral-24B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q5_K_M.gguf) | Q5_K_M | 16.764 GB | large, very low quality loss - recommended |
| [Dolphin3.0-R1-Mistral-24B-Q6_K.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q6_K.gguf) | Q6_K | 19.346 GB | very large, extremely low quality loss |
| [Dolphin3.0-R1-Mistral-24B-Q8_0.gguf](https://huggingface.co/tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF/blob/main/Dolphin3.0-R1-Mistral-24B-Q8_0.gguf) | Q8_0 | 25.055 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF --include "Dolphin3.0-R1-Mistral-24B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Dolphin3.0-R1-Mistral-24B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/gemma-3-1b-it-abliterated-GGUF | tensorblock | 2025-06-19T01:35:26Z | 119 | 1 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:huihui-ai/gemma-3-1b-it-abliterated",
"base_model:quantized:huihui-ai/gemma-3-1b-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-21T02:32:08Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
agree to Google's usage license. To do this, please ensure you're logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: huihui-ai/gemma-3-1b-it-abliterated
tags:
- chat
- abliterated
- uncensored
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## huihui-ai/gemma-3-1b-it-abliterated - GGUF
This repo contains GGUF format model files for [huihui-ai/gemma-3-1b-it-abliterated](https://huggingface.co/huihui-ai/gemma-3-1b-it-abliterated).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{system_prompt}
{prompt}<end_of_turn>
<start_of_turn>model
```
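Note that the template above folds the system prompt into the first user turn rather than using a separate system role. A small helper that reproduces it, for illustration only:
```python
def build_gemma_prompt(system_prompt: str, prompt: str) -> str:
    # Mirrors the template above: the system prompt is prepended to the user turn.
    return (
        "<bos><start_of_turn>user\n"
        f"{system_prompt}\n"
        f"{prompt}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_gemma_prompt("You are a concise assistant.", "Name three prime numbers."))
```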
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-3-1b-it-abliterated-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q2_K.gguf) | Q2_K | 0.690 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-3-1b-it-abliterated-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q3_K_S.gguf) | Q3_K_S | 0.689 GB | very small, high quality loss |
| [gemma-3-1b-it-abliterated-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q3_K_M.gguf) | Q3_K_M | 0.722 GB | very small, high quality loss |
| [gemma-3-1b-it-abliterated-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q3_K_L.gguf) | Q3_K_L | 0.752 GB | small, substantial quality loss |
| [gemma-3-1b-it-abliterated-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q4_0.gguf) | Q4_0 | 0.720 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-3-1b-it-abliterated-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q4_K_S.gguf) | Q4_K_S | 0.781 GB | small, greater quality loss |
| [gemma-3-1b-it-abliterated-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q4_K_M.gguf) | Q4_K_M | 0.806 GB | medium, balanced quality - recommended |
| [gemma-3-1b-it-abliterated-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q5_0.gguf) | Q5_0 | 0.808 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-3-1b-it-abliterated-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q5_K_S.gguf) | Q5_K_S | 0.836 GB | large, low quality loss - recommended |
| [gemma-3-1b-it-abliterated-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q5_K_M.gguf) | Q5_K_M | 0.851 GB | large, very low quality loss - recommended |
| [gemma-3-1b-it-abliterated-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q6_K.gguf) | Q6_K | 1.012 GB | very large, extremely low quality loss |
| [gemma-3-1b-it-abliterated-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-abliterated-GGUF/blob/main/gemma-3-1b-it-abliterated-Q8_0.gguf) | Q8_0 | 1.069 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gemma-3-1b-it-abliterated-GGUF --include "gemma-3-1b-it-abliterated-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gemma-3-1b-it-abliterated-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/gemma-3-12b-it-GGUF | tensorblock | 2025-06-19T01:35:05Z | 178 | 1 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"image-text-to-text",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-03-13T23:32:38Z | ---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
agree to Google's usage license. To do this, please ensure you're logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-12b-it
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## google/gemma-3-12b-it - GGUF
This repo contains GGUF format model files for [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{system_prompt}
{prompt}<end_of_turn>
<start_of_turn>model
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-3-12b-it-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q2_K.gguf) | Q2_K | 4.768 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-3-12b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q3_K_S.gguf) | Q3_K_S | 5.458 GB | very small, high quality loss |
| [gemma-3-12b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q3_K_M.gguf) | Q3_K_M | 6.009 GB | very small, high quality loss |
| [gemma-3-12b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q3_K_L.gguf) | Q3_K_L | 6.480 GB | small, substantial quality loss |
| [gemma-3-12b-it-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q4_0.gguf) | Q4_0 | 6.887 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-3-12b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q4_K_S.gguf) | Q4_K_S | 6.935 GB | small, greater quality loss |
| [gemma-3-12b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q4_K_M.gguf) | Q4_K_M | 7.301 GB | medium, balanced quality - recommended |
| [gemma-3-12b-it-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q5_0.gguf) | Q5_0 | 8.232 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-3-12b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q5_K_S.gguf) | Q5_K_S | 8.232 GB | large, low quality loss - recommended |
| [gemma-3-12b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q5_K_M.gguf) | Q5_K_M | 8.445 GB | large, very low quality loss - recommended |
| [gemma-3-12b-it-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q6_K.gguf) | Q6_K | 9.661 GB | very large, extremely low quality loss |
| [gemma-3-12b-it-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-3-12b-it-GGUF/blob/main/gemma-3-12b-it-Q8_0.gguf) | Q8_0 | 12.510 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gemma-3-12b-it-GGUF --include "gemma-3-12b-it-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gemma-3-12b-it-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/gemma-3-1b-it-GGUF | tensorblock | 2025-06-19T01:35:00Z | 169 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"gemma3",
"gemma",
"google",
"TensorBlock",
"GGUF",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:quantized:unsloth/gemma-3-1b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-13T22:25:14Z | ---
base_model: unsloth/gemma-3-1b-it
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma3
- gemma
- google
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## unsloth/gemma-3-1b-it - GGUF
This repo contains GGUF format model files for [unsloth/gemma-3-1b-it](https://huggingface.co/unsloth/gemma-3-1b-it).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{system_prompt}
{prompt}<end_of_turn>
<start_of_turn>model
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-3-1b-it-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q2_K.gguf) | Q2_K | 0.690 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-3-1b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_S.gguf) | Q3_K_S | 0.689 GB | very small, high quality loss |
| [gemma-3-1b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_M.gguf) | Q3_K_M | 0.722 GB | very small, high quality loss |
| [gemma-3-1b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_L.gguf) | Q3_K_L | 0.752 GB | small, substantial quality loss |
| [gemma-3-1b-it-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_0.gguf) | Q4_0 | 0.720 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-3-1b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_S.gguf) | Q4_K_S | 0.781 GB | small, greater quality loss |
| [gemma-3-1b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_M.gguf) | Q4_K_M | 0.806 GB | medium, balanced quality - recommended |
| [gemma-3-1b-it-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_0.gguf) | Q5_0 | 0.808 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-3-1b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_S.gguf) | Q5_K_S | 0.836 GB | large, low quality loss - recommended |
| [gemma-3-1b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_M.gguf) | Q5_K_M | 0.851 GB | large, very low quality loss - recommended |
| [gemma-3-1b-it-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q6_K.gguf) | Q6_K | 1.012 GB | very large, extremely low quality loss |
| [gemma-3-1b-it-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q8_0.gguf) | Q8_0 | 1.069 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gemma-3-1b-it-GGUF --include "gemma-3-1b-it-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gemma-3-1b-it-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/macbert4mdcspell_v1-GGUF | tensorblock | 2025-06-19T01:34:40Z | 101 | 0 | null | [
"gguf",
"csc",
"text-correct",
"chinses-spelling-correct",
"chinese-spelling-check",
"δΈζζΌεηΊ ι",
"ζζ¬ηΊ ι",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"base_model:Macropodus/macbert4mdcspell_v1",
"base_model:quantized:Macropodus/macbert4mdcspell_v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | text-generation | 2025-03-08T23:21:19Z | ---
license: apache-2.0
language:
- zh
base_model: Macropodus/macbert4mdcspell_v1
pipeline_tag: text-generation
tags:
- csc
- text-correct
- chinses-spelling-correct
- chinese-spelling-check
- 中文拼写纠错
- 文本纠错
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Macropodus/macbert4mdcspell_v1 - GGUF
This repo contains GGUF format model files for [Macropodus/macbert4mdcspell_v1](https://huggingface.co/Macropodus/macbert4mdcspell_v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [macbert4mdcspell_v1-Q2_K.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q2_K.gguf) | Q2_K | 0.048 GB | smallest, significant quality loss - not recommended for most purposes |
| [macbert4mdcspell_v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q3_K_S.gguf) | Q3_K_S | 0.052 GB | very small, high quality loss |
| [macbert4mdcspell_v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q3_K_M.gguf) | Q3_K_M | 0.058 GB | very small, high quality loss |
| [macbert4mdcspell_v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q3_K_L.gguf) | Q3_K_L | 0.063 GB | small, substantial quality loss |
| [macbert4mdcspell_v1-Q4_0.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q4_0.gguf) | Q4_0 | 0.064 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [macbert4mdcspell_v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q4_K_S.gguf) | Q4_K_S | 0.064 GB | small, greater quality loss |
| [macbert4mdcspell_v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q4_K_M.gguf) | Q4_K_M | 0.068 GB | medium, balanced quality - recommended |
| [macbert4mdcspell_v1-Q5_0.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q5_0.gguf) | Q5_0 | 0.074 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [macbert4mdcspell_v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q5_K_S.gguf) | Q5_K_S | 0.074 GB | large, low quality loss - recommended |
| [macbert4mdcspell_v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q5_K_M.gguf) | Q5_K_M | 0.076 GB | large, very low quality loss - recommended |
| [macbert4mdcspell_v1-Q6_K.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q6_K.gguf) | Q6_K | 0.085 GB | very large, extremely low quality loss |
| [macbert4mdcspell_v1-Q8_0.gguf](https://huggingface.co/tensorblock/macbert4mdcspell_v1-GGUF/blob/main/macbert4mdcspell_v1-Q8_0.gguf) | Q8_0 | 0.110 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/macbert4mdcspell_v1-GGUF --include "macbert4mdcspell_v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/macbert4mdcspell_v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
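If you prefer to fetch every quantized file in this repo at once (for example, to compare quant types locally), you can drop the `--include` filter. This is a minimal sketch and assumes you have enough local disk space for all of the files listed above:
```shell
# Download every file in the repo into MY_LOCAL_DIR (no --include filter)
huggingface-cli download tensorblock/macbert4mdcspell_v1-GGUF --local-dir MY_LOCAL_DIR
```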
|
tensorblock/stratos-unverified-mix-scaled-1-GGUF | tensorblock | 2025-06-19T01:34:22Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/stratos-unverified-mix-scaled-1",
"base_model:quantized:mlfoundations-dev/stratos-unverified-mix-scaled-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T06:49:23Z | ---
library_name: transformers
license: apache-2.0
base_model: mlfoundations-dev/stratos-unverified-mix-scaled-1
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: stratos-unverified-mix-scaled-1
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/stratos-unverified-mix-scaled-1 - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/stratos-unverified-mix-scaled-1](https://huggingface.co/mlfoundations-dev/stratos-unverified-mix-scaled-1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [stratos-unverified-mix-scaled-1-Q2_K.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [stratos-unverified-mix-scaled-1-Q3_K_S.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [stratos-unverified-mix-scaled-1-Q3_K_M.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [stratos-unverified-mix-scaled-1-Q3_K_L.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [stratos-unverified-mix-scaled-1-Q4_0.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stratos-unverified-mix-scaled-1-Q4_K_S.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [stratos-unverified-mix-scaled-1-Q4_K_M.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [stratos-unverified-mix-scaled-1-Q5_0.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stratos-unverified-mix-scaled-1-Q5_K_S.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [stratos-unverified-mix-scaled-1-Q5_K_M.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [stratos-unverified-mix-scaled-1-Q6_K.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [stratos-unverified-mix-scaled-1-Q8_0.gguf](https://huggingface.co/tensorblock/stratos-unverified-mix-scaled-1-GGUF/blob/main/stratos-unverified-mix-scaled-1-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/stratos-unverified-mix-scaled-1-GGUF --include "stratos-unverified-mix-scaled-1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/stratos-unverified-mix-scaled-1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
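After downloading, you can sanity-check a file directly with the llama.cpp CLI. The example below is a sketch rather than an official recipe: it assumes a llama.cpp build at or after the commit referenced above, that the `llama-cli` binary is on your PATH, and that the Q4_K_M file sits in MY_LOCAL_DIR. The prompt follows the ChatML template shown earlier, and `-e` tells llama.cpp to interpret the `\n` escapes:
```shell
# Quick single-prompt test of the Q4_K_M quant (assumed llama.cpp build with llama-cli)
llama-cli -m MY_LOCAL_DIR/stratos-unverified-mix-scaled-1-Q4_K_M.gguf \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n" \
  -n 256 -e
```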
|
tensorblock/math-stratos-verified-scaled-0.125-GGUF | tensorblock | 2025-06-19T01:32:44Z | 71 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/math-stratos-verified-scaled-0.125",
"base_model:quantized:mlfoundations-dev/math-stratos-verified-scaled-0.125",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-07T08:24:53Z | ---
library_name: transformers
license: apache-2.0
base_model: mlfoundations-dev/math-stratos-verified-scaled-0.125
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: math-stratos-verified-scaled-0.125
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/math-stratos-verified-scaled-0.125 - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/math-stratos-verified-scaled-0.125](https://huggingface.co/mlfoundations-dev/math-stratos-verified-scaled-0.125).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [math-stratos-verified-scaled-0.125-Q2_K.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [math-stratos-verified-scaled-0.125-Q3_K_S.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [math-stratos-verified-scaled-0.125-Q3_K_M.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [math-stratos-verified-scaled-0.125-Q3_K_L.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [math-stratos-verified-scaled-0.125-Q4_0.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [math-stratos-verified-scaled-0.125-Q4_K_S.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [math-stratos-verified-scaled-0.125-Q4_K_M.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [math-stratos-verified-scaled-0.125-Q5_0.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [math-stratos-verified-scaled-0.125-Q5_K_S.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [math-stratos-verified-scaled-0.125-Q5_K_M.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [math-stratos-verified-scaled-0.125-Q6_K.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [math-stratos-verified-scaled-0.125-Q8_0.gguf](https://huggingface.co/tensorblock/math-stratos-verified-scaled-0.125-GGUF/blob/main/math-stratos-verified-scaled-0.125-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/math-stratos-verified-scaled-0.125-GGUF --include "math-stratos-verified-scaled-0.125-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/math-stratos-verified-scaled-0.125-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
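Beyond one-off prompts, the GGUF file can be served over HTTP. This is a hedged sketch: it assumes your llama.cpp build ships the `llama-server` binary with its OpenAI-compatible `/v1/chat/completions` endpoint, and that the Q4_K_M file was downloaded to MY_LOCAL_DIR:
```shell
# Start a local server on port 8080 (assumed llama.cpp build with llama-server)
llama-server -m MY_LOCAL_DIR/math-stratos-verified-scaled-0.125-Q4_K_M.gguf --port 8080 &

# Query the OpenAI-compatible chat endpoint; the server applies the model's chat template
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is 12 * 7?"}], "max_tokens": 64}'
```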
|
tensorblock/ECE_Poirot-GGUF | tensorblock | 2025-06-19T01:32:37Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:SpaceYL/ECE_Poirot",
"base_model:quantized:SpaceYL/ECE_Poirot",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-07T07:37:49Z | ---
base_model: SpaceYL/ECE_Poirot
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
license: apache-2.0
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## SpaceYL/ECE_Poirot - GGUF
This repo contains GGUF format model files for [SpaceYL/ECE_Poirot](https://huggingface.co/SpaceYL/ECE_Poirot).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ECE_Poirot-Q2_K.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q2_K.gguf) | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes |
| [ECE_Poirot-Q3_K_S.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q3_K_S.gguf) | Q3_K_S | 0.761 GB | very small, high quality loss |
| [ECE_Poirot-Q3_K_M.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q3_K_M.gguf) | Q3_K_M | 0.824 GB | very small, high quality loss |
| [ECE_Poirot-Q3_K_L.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q3_K_L.gguf) | Q3_K_L | 0.880 GB | small, substantial quality loss |
| [ECE_Poirot-Q4_0.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q4_0.gguf) | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ECE_Poirot-Q4_K_S.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q4_K_S.gguf) | Q4_K_S | 0.940 GB | small, greater quality loss |
| [ECE_Poirot-Q4_K_M.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q4_K_M.gguf) | Q4_K_M | 0.986 GB | medium, balanced quality - recommended |
| [ECE_Poirot-Q5_0.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q5_0.gguf) | Q5_0 | 1.099 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ECE_Poirot-Q5_K_S.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q5_K_S.gguf) | Q5_K_S | 1.099 GB | large, low quality loss - recommended |
| [ECE_Poirot-Q5_K_M.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q5_K_M.gguf) | Q5_K_M | 1.125 GB | large, very low quality loss - recommended |
| [ECE_Poirot-Q6_K.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q6_K.gguf) | Q6_K | 1.273 GB | very large, extremely low quality loss |
| [ECE_Poirot-Q8_0.gguf](https://huggingface.co/tensorblock/ECE_Poirot-GGUF/blob/main/ECE_Poirot-Q8_0.gguf) | Q8_0 | 1.647 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ECE_Poirot-GGUF --include "ECE_Poirot-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ECE_Poirot-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/qwen25-math-7b-instruct-GGUF | tensorblock | 2025-06-19T01:32:25Z | 63 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:MInference/qwen25-math-7b-instruct",
"base_model:quantized:MInference/qwen25-math-7b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-07T06:05:03Z | ---
base_model: MInference/qwen25-math-7b-instruct
language:
- en
pipeline_tag: text-generation
tags:
- chat
- TensorBlock
- GGUF
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct/blob/main/LICENSE
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MInference/qwen25-math-7b-instruct - GGUF
This repo contains GGUF format model files for [MInference/qwen25-math-7b-instruct](https://huggingface.co/MInference/qwen25-math-7b-instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [qwen25-math-7b-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [qwen25-math-7b-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [qwen25-math-7b-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [qwen25-math-7b-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [qwen25-math-7b-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [qwen25-math-7b-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [qwen25-math-7b-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [qwen25-math-7b-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [qwen25-math-7b-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [qwen25-math-7b-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [qwen25-math-7b-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [qwen25-math-7b-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/qwen25-math-7b-instruct-GGUF/blob/main/qwen25-math-7b-instruct-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/qwen25-math-7b-instruct-GGUF --include "qwen25-math-7b-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/qwen25-math-7b-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF | tensorblock | 2025-06-19T01:32:18Z | 451 | 1 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022",
"base_model:quantized:mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-07T05:23:28Z | ---
library_name: transformers
license: apache-2.0
base_model: mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022 - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022](https://huggingface.co/mlfoundations-dev/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q2_K.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q2_K.gguf) | Q2_K | 2.723 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q3_K_S.gguf) | Q3_K_S | 3.169 GB | very small, high quality loss |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q3_K_M.gguf) | Q3_K_M | 3.523 GB | very small, high quality loss |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q3_K_L.gguf) | Q3_K_L | 3.826 GB | small, substantial quality loss |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q4_0.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q4_0.gguf) | Q4_0 | 4.113 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q4_K_S.gguf) | Q4_K_S | 4.145 GB | small, greater quality loss |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q4_K_M.gguf) | Q4_K_M | 4.373 GB | medium, balanced quality - recommended |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q5_0.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q5_0.gguf) | Q5_0 | 5.002 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q5_K_S.gguf) | Q5_K_S | 5.002 GB | large, low quality loss - recommended |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q5_K_M.gguf) | Q5_K_M | 5.136 GB | large, very low quality loss - recommended |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q6_K.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q6_K.gguf) | Q6_K | 5.947 GB | very large, extremely low quality loss |
| [mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q8_0.gguf](https://huggingface.co/tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF/blob/main/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q8_0.gguf) | Q8_0 | 7.703 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF --include "mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mistral_7b_0-3_oh-dcft-v3.1-claude-3-5-sonnet-20241022-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/0x-lite-GGUF | tensorblock | 2025-06-19T01:31:39Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"zh",
"dataset:lmsys/lmsys-chat-1m",
"base_model:ozone-research/0x-lite",
"base_model:quantized:ozone-research/0x-lite",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-06T22:04:09Z | ---
library_name: transformers
datasets:
- lmsys/lmsys-chat-1m
base_model: ozone-research/0x-lite
pipeline_tag: text-generation
language:
- en
- zh
license: apache-2.0
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ozone-research/0x-lite - GGUF
This repo contains GGUF format model files for [ozone-research/0x-lite](https://huggingface.co/ozone-research/0x-lite).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [0x-lite-Q2_K.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q2_K.gguf) | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |
| [0x-lite-Q3_K_S.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q3_K_S.gguf) | Q3_K_S | 6.660 GB | very small, high quality loss |
| [0x-lite-Q3_K_M.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q3_K_M.gguf) | Q3_K_M | 7.339 GB | very small, high quality loss |
| [0x-lite-Q3_K_L.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q3_K_L.gguf) | Q3_K_L | 7.925 GB | small, substantial quality loss |
| [0x-lite-Q4_0.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q4_0.gguf) | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [0x-lite-Q4_K_S.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q4_K_S.gguf) | Q4_K_S | 8.573 GB | small, greater quality loss |
| [0x-lite-Q4_K_M.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q4_K_M.gguf) | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |
| [0x-lite-Q5_0.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q5_0.gguf) | Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [0x-lite-Q5_K_S.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q5_K_S.gguf) | Q5_K_S | 10.267 GB | large, low quality loss - recommended |
| [0x-lite-Q5_K_M.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q5_K_M.gguf) | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |
| [0x-lite-Q6_K.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q6_K.gguf) | Q6_K | 12.125 GB | very large, extremely low quality loss |
| [0x-lite-Q8_0.gguf](https://huggingface.co/tensorblock/0x-lite-GGUF/blob/main/0x-lite-Q8_0.gguf) | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/0x-lite-GGUF --include "0x-lite-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/0x-lite-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/YuLan-Mini-GGUF | tensorblock | 2025-06-19T01:31:38Z | 276 | 1 | transformers | [
"transformers",
"gguf",
"code",
"math",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"zh",
"dataset:yulan-team/YuLan-Mini-Datasets",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:bigcode/the-stack-v2",
"dataset:mlfoundations/dclm-baseline-1.0",
"dataset:math-ai/AutoMathText",
"dataset:gair-prox/open-web-math-pro",
"dataset:RUC-AIBOX/long_form_thought_data_5k",
"dataset:internlm/Lean-Workbook",
"dataset:internlm/Lean-Github",
"dataset:deepseek-ai/DeepSeek-Prover-V1",
"dataset:ScalableMath/Lean-STaR-base",
"dataset:ScalableMath/Lean-STaR-plus",
"dataset:ScalableMath/Lean-CoT-base",
"dataset:ScalableMath/Lean-CoT-plus",
"dataset:opencsg/chinese-fineweb-edu",
"dataset:liwu/MNBVC",
"dataset:vikp/textbook_quality_programming",
"dataset:HuggingFaceTB/smollm-corpus",
"dataset:OpenCoder-LLM/opc-annealing-corpus",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:XinyaoHu/AMPS_mathematica",
"dataset:deepmind/math_dataset",
"dataset:mrfakename/basic-math-10m",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:MU-NLPC/Calc-ape210k",
"dataset:manu/project_gutenberg",
"dataset:storytracer/LoC-PD-Books",
"dataset:allenai/dolma",
"base_model:yulan-team/YuLan-Mini",
"base_model:quantized:yulan-team/YuLan-Mini",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-06T22:04:03Z | ---
license: mit
library_name: transformers
pipeline_tag: text-generation
datasets:
- yulan-team/YuLan-Mini-Datasets
- HuggingFaceFW/fineweb-edu
- bigcode/the-stack-v2
- mlfoundations/dclm-baseline-1.0
- math-ai/AutoMathText
- gair-prox/open-web-math-pro
- RUC-AIBOX/long_form_thought_data_5k
- internlm/Lean-Workbook
- internlm/Lean-Github
- deepseek-ai/DeepSeek-Prover-V1
- ScalableMath/Lean-STaR-base
- ScalableMath/Lean-STaR-plus
- ScalableMath/Lean-CoT-base
- ScalableMath/Lean-CoT-plus
- opencsg/chinese-fineweb-edu
- liwu/MNBVC
- vikp/textbook_quality_programming
- HuggingFaceTB/smollm-corpus
- OpenCoder-LLM/opc-annealing-corpus
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- XinyaoHu/AMPS_mathematica
- deepmind/math_dataset
- mrfakename/basic-math-10m
- microsoft/orca-math-word-problems-200k
- AI-MO/NuminaMath-CoT
- HuggingFaceTB/cosmopedia
- MU-NLPC/Calc-ape210k
- manu/project_gutenberg
- storytracer/LoC-PD-Books
- allenai/dolma
language:
- en
- zh
tags:
- code
- math
- TensorBlock
- GGUF
arxiv: 2412.17743
base_model: yulan-team/YuLan-Mini
model-index:
- name: YuLan-Mini
results:
- task:
type: text-generation
dataset:
name: HumanEval
type: openai_humaneval
metrics:
- type: pass@1
value: 0.64
name: pass@1
verified: false
- task:
type: text-generation
dataset:
name: MBPP
type: mbpp
metrics:
- type: pass@1
value: 0.659
name: pass@1
verified: false
- task:
type: text-generation
dataset:
name: MATH-500
type: math-500
metrics:
- type: maj@1
value: 0.378
name: maj@1
verified: false
- task:
type: text-generation
dataset:
name: GSM8K
type: gsm8k
metrics:
- type: maj@1
value: 0.684
name: maj@1
verified: false
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## yulan-team/YuLan-Mini - GGUF
This repo contains GGUF format model files for [yulan-team/YuLan-Mini](https://huggingface.co/yulan-team/YuLan-Mini).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>
<|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [YuLan-Mini-Q2_K.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q2_K.gguf) | Q2_K | 1.468 GB | smallest, significant quality loss - not recommended for most purposes |
| [YuLan-Mini-Q3_K_S.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q3_K_S.gguf) | Q3_K_S | 1.463 GB | very small, high quality loss |
| [YuLan-Mini-Q3_K_M.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q3_K_M.gguf) | Q3_K_M | 1.560 GB | very small, high quality loss |
| [YuLan-Mini-Q3_K_L.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q3_K_L.gguf) | Q3_K_L | 1.606 GB | small, substantial quality loss |
| [YuLan-Mini-Q4_0.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q4_0.gguf) | Q4_0 | 1.463 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [YuLan-Mini-Q4_K_S.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q4_K_S.gguf) | Q4_K_S | 1.746 GB | small, greater quality loss |
| [YuLan-Mini-Q4_K_M.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q4_K_M.gguf) | Q4_K_M | 1.846 GB | medium, balanced quality - recommended |
| [YuLan-Mini-Q5_0.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q5_0.gguf) | Q5_0 | 1.742 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [YuLan-Mini-Q5_K_S.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q5_K_S.gguf) | Q5_K_S | 1.882 GB | large, low quality loss - recommended |
| [YuLan-Mini-Q5_K_M.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q5_K_M.gguf) | Q5_K_M | 1.969 GB | large, very low quality loss - recommended |
| [YuLan-Mini-Q6_K.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q6_K.gguf) | Q6_K | 2.580 GB | very large, extremely low quality loss |
| [YuLan-Mini-Q8_0.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q8_0.gguf) | Q8_0 | 2.580 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/YuLan-Mini-GGUF --include "YuLan-Mini-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/YuLan-Mini-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
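Since this model uses the Llama-3-style chat template shown above, it can be simpler to let llama.cpp apply the template for you. The following is only a sketch and assumes your `llama-cli` build supports interactive conversation mode (`-cnv`), in which case the chat template embedded in the GGUF metadata is used automatically:
```shell
# Interactive chat; the template stored in the GGUF metadata is applied (assumes -cnv support)
llama-cli -m MY_LOCAL_DIR/YuLan-Mini-Q4_K_M.gguf -cnv
```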
|