modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
tensorblock/math_gpt2_sft-GGUF | tensorblock | 2025-04-21T00:34:23Z | 67 | 0 | null | [
"gguf",
"maths",
"gpt2",
"mathgpt2",
"trl",
"sft",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:meta-math/MetaMathQA",
"dataset:ArtifactAI/arxiv-math-instruct-50k",
"base_model:Sharathhebbar24/math_gpt2_sft",
"base_model:quantized:Sharathhebbar24/math_gpt2_sft",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-21T19:59:06Z | ---
language:
- en
license: apache-2.0
tags:
- maths
- gpt2
- mathgpt2
- trl
- sft
- TensorBlock
- GGUF
datasets:
- meta-math/MetaMathQA
- ArtifactAI/arxiv-math-instruct-50k
pipeline_tag: text-generation
widget:
- text: Which motion is formed by an incident particle?
example_title: Example 1
- text: What type of diffusional modeling is used for diffusion?
example_title: Example 2
base_model: Sharathhebbar24/math_gpt2_sft
model-index:
- name: math_gpt2_sft
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 22.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 30.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Sharathhebbar24/math_gpt2_sft - GGUF
This repo contains GGUF format model files for [Sharathhebbar24/math_gpt2_sft](https://huggingface.co/Sharathhebbar24/math_gpt2_sft).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
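As a quick sanity check, you can build llama.cpp at (or after) that commit and point it at one of the files below. This is a minimal sketch assuming a standard CMake toolchain; it is not the only supported build path.
```shell
# Minimal sketch: build llama.cpp at the compatible commit (standard CMake build assumed).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d
cmake -B build
cmake --build build --config Release
# The CLI binary typically ends up at build/bin/llama-cli.
```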
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [math_gpt2_sft-Q2_K.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q2_K.gguf) | Q2_K | 0.081 GB | smallest, significant quality loss - not recommended for most purposes |
| [math_gpt2_sft-Q3_K_S.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q3_K_S.gguf) | Q3_K_S | 0.090 GB | very small, high quality loss |
| [math_gpt2_sft-Q3_K_M.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q3_K_M.gguf) | Q3_K_M | 0.098 GB | very small, high quality loss |
| [math_gpt2_sft-Q3_K_L.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q3_K_L.gguf) | Q3_K_L | 0.102 GB | small, substantial quality loss |
| [math_gpt2_sft-Q4_0.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q4_0.gguf) | Q4_0 | 0.107 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [math_gpt2_sft-Q4_K_S.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q4_K_S.gguf) | Q4_K_S | 0.107 GB | small, greater quality loss |
| [math_gpt2_sft-Q4_K_M.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q4_K_M.gguf) | Q4_K_M | 0.113 GB | medium, balanced quality - recommended |
| [math_gpt2_sft-Q5_0.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q5_0.gguf) | Q5_0 | 0.122 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [math_gpt2_sft-Q5_K_S.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q5_K_S.gguf) | Q5_K_S | 0.122 GB | large, low quality loss - recommended |
| [math_gpt2_sft-Q5_K_M.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q5_K_M.gguf) | Q5_K_M | 0.127 GB | large, very low quality loss - recommended |
| [math_gpt2_sft-Q6_K.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q6_K.gguf) | Q6_K | 0.138 GB | very large, extremely low quality loss |
| [math_gpt2_sft-Q8_0.gguf](https://huggingface.co/tensorblock/math_gpt2_sft-GGUF/blob/main/math_gpt2_sft-Q8_0.gguf) | Q8_0 | 0.178 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/math_gpt2_sft-GGUF --include "math_gpt2_sft-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/math_gpt2_sft-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
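Once a file is downloaded, you can try it directly with llama.cpp's CLI. The command below is a sketch: it assumes the Q2_K file from the example above sits in MY_LOCAL_DIR and that `llama-cli` is available (e.g., built as shown earlier); all flags other than `-m`, `-p`, and `-n` are left at their defaults.
```shell
# Sketch: run the downloaded GGUF file with llama.cpp (paths are illustrative).
llama-cli -m MY_LOCAL_DIR/math_gpt2_sft-Q2_K.gguf \
  -p "Question: What is 12 * 7? Answer:" \
  -n 64
```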
|
tensorblock/Truthful_DPO_MOE_19B-GGUF | tensorblock | 2025-04-21T00:34:18Z | 25 | 0 | null | [
"gguf",
"moe",
"DPO",
"RL-TUNED",
"TensorBlock",
"GGUF",
"base_model:yunconglong/Truthful_DPO_MOE_19B",
"base_model:quantized:yunconglong/Truthful_DPO_MOE_19B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T19:42:33Z | ---
license: other
tags:
- moe
- DPO
- RL-TUNED
- TensorBlock
- GGUF
base_model: yunconglong/Truthful_DPO_MOE_19B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## yunconglong/Truthful_DPO_MOE_19B - GGUF
This repo contains GGUF format model files for [yunconglong/Truthful_DPO_MOE_19B](https://huggingface.co/yunconglong/Truthful_DPO_MOE_19B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
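As a rough illustration of how this template is meant to be filled in (the file name and prompt below are placeholders, and llama.cpp's built-in chat handling may differ), you could pass a formatted prompt directly:
```shell
# Sketch: fill the ### System / ### User / ### Assistant template by hand (placeholder file name).
llama-cli -m MY_LOCAL_DIR/Truthful_DPO_MOE_19B-Q4_K_M.gguf -n 128 -p "### System:
You are a helpful, truthful assistant.
### User:
Why is the sky blue?
### Assistant:
"
```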
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Truthful_DPO_MOE_19B-Q2_K.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q2_K.gguf) | Q2_K | 7.066 GB | smallest, significant quality loss - not recommended for most purposes |
| [Truthful_DPO_MOE_19B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q3_K_S.gguf) | Q3_K_S | 8.299 GB | very small, high quality loss |
| [Truthful_DPO_MOE_19B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q3_K_M.gguf) | Q3_K_M | 9.227 GB | very small, high quality loss |
| [Truthful_DPO_MOE_19B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q3_K_L.gguf) | Q3_K_L | 10.012 GB | small, substantial quality loss |
| [Truthful_DPO_MOE_19B-Q4_0.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q4_0.gguf) | Q4_0 | 10.830 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Truthful_DPO_MOE_19B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q4_K_S.gguf) | Q4_K_S | 10.920 GB | small, greater quality loss |
| [Truthful_DPO_MOE_19B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q4_K_M.gguf) | Q4_K_M | 11.583 GB | medium, balanced quality - recommended |
| [Truthful_DPO_MOE_19B-Q5_0.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q5_0.gguf) | Q5_0 | 13.212 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Truthful_DPO_MOE_19B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q5_K_S.gguf) | Q5_K_S | 13.212 GB | large, low quality loss - recommended |
| [Truthful_DPO_MOE_19B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q5_K_M.gguf) | Q5_K_M | 13.600 GB | large, very low quality loss - recommended |
| [Truthful_DPO_MOE_19B-Q6_K.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q6_K.gguf) | Q6_K | 15.743 GB | very large, extremely low quality loss |
| [Truthful_DPO_MOE_19B-Q8_0.gguf](https://huggingface.co/tensorblock/Truthful_DPO_MOE_19B-GGUF/blob/main/Truthful_DPO_MOE_19B-Q8_0.gguf) | Q8_0 | 20.390 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Truthful_DPO_MOE_19B-GGUF --include "Truthful_DPO_MOE_19B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Truthful_DPO_MOE_19B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/PiVoT-MoE-GGUF | tensorblock | 2025-04-21T00:34:17Z | 28 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:maywell/PiVoT-MoE",
"base_model:quantized:maywell/PiVoT-MoE",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T19:38:00Z | ---
license: cc-by-nc-4.0
base_model: maywell/PiVoT-MoE
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## maywell/PiVoT-MoE - GGUF
This repo contains GGUF format model files for [maywell/PiVoT-MoE](https://huggingface.co/maywell/PiVoT-MoE).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
{system_prompt}{system_prompt}### Instruction: {prompt}### Response:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [PiVoT-MoE-Q2_K.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q2_K.gguf) | Q2_K | 13.189 GB | smallest, significant quality loss - not recommended for most purposes |
| [PiVoT-MoE-Q3_K_S.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q3_K_S.gguf) | Q3_K_S | 15.568 GB | very small, high quality loss |
| [PiVoT-MoE-Q3_K_M.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q3_K_M.gguf) | Q3_K_M | 17.288 GB | very small, high quality loss |
| [PiVoT-MoE-Q3_K_L.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q3_K_L.gguf) | Q3_K_L | 18.734 GB | small, substantial quality loss |
| [PiVoT-MoE-Q4_0.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q4_0.gguf) | Q4_0 | 20.345 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [PiVoT-MoE-Q4_K_S.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q4_K_S.gguf) | Q4_K_S | 20.523 GB | small, greater quality loss |
| [PiVoT-MoE-Q4_K_M.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q4_K_M.gguf) | Q4_K_M | 21.824 GB | medium, balanced quality - recommended |
| [PiVoT-MoE-Q5_0.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q5_0.gguf) | Q5_0 | 24.840 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [PiVoT-MoE-Q5_K_S.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q5_K_S.gguf) | Q5_K_S | 24.840 GB | large, low quality loss - recommended |
| [PiVoT-MoE-Q5_K_M.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q5_K_M.gguf) | Q5_K_M | 25.603 GB | large, very low quality loss - recommended |
| [PiVoT-MoE-Q6_K.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q6_K.gguf) | Q6_K | 29.617 GB | very large, extremely low quality loss |
| [PiVoT-MoE-Q8_0.gguf](https://huggingface.co/tensorblock/PiVoT-MoE-GGUF/blob/main/PiVoT-MoE-Q8_0.gguf) | Q8_0 | 38.360 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/PiVoT-MoE-GGUF --include "PiVoT-MoE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/PiVoT-MoE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/open-llama-2-ko-7b-kullm-GGUF | tensorblock | 2025-04-21T00:34:16Z | 30 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:kimjaewon/open-llama-2-ko-7b-kullm",
"base_model:quantized:kimjaewon/open-llama-2-ko-7b-kullm",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T19:25:45Z | ---
base_model: kimjaewon/open-llama-2-ko-7b-kullm
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kimjaewon/open-llama-2-ko-7b-kullm - GGUF
This repo contains GGUF format model files for [kimjaewon/open-llama-2-ko-7b-kullm](https://huggingface.co/kimjaewon/open-llama-2-ko-7b-kullm).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [open-llama-2-ko-7b-kullm-Q2_K.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q2_K.gguf) | Q2_K | 2.601 GB | smallest, significant quality loss - not recommended for most purposes |
| [open-llama-2-ko-7b-kullm-Q3_K_S.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [open-llama-2-ko-7b-kullm-Q3_K_M.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [open-llama-2-ko-7b-kullm-Q3_K_L.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [open-llama-2-ko-7b-kullm-Q4_0.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [open-llama-2-ko-7b-kullm-Q4_K_S.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [open-llama-2-ko-7b-kullm-Q4_K_M.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q4_K_M.gguf) | Q4_K_M | 4.163 GB | medium, balanced quality - recommended |
| [open-llama-2-ko-7b-kullm-Q5_0.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q5_0.gguf) | Q5_0 | 4.741 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [open-llama-2-ko-7b-kullm-Q5_K_S.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q5_K_S.gguf) | Q5_K_S | 4.741 GB | large, low quality loss - recommended |
| [open-llama-2-ko-7b-kullm-Q5_K_M.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [open-llama-2-ko-7b-kullm-Q6_K.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [open-llama-2-ko-7b-kullm-Q8_0.gguf](https://huggingface.co/tensorblock/open-llama-2-ko-7b-kullm-GGUF/blob/main/open-llama-2-ko-7b-kullm-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/open-llama-2-ko-7b-kullm-GGUF --include "open-llama-2-ko-7b-kullm-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/open-llama-2-ko-7b-kullm-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF | tensorblock | 2025-04-21T00:34:15Z | 32 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:wkshin89/mistral-7b-instruct-ko-test-v0.3",
"base_model:quantized:wkshin89/mistral-7b-instruct-ko-test-v0.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T18:58:40Z | ---
base_model: wkshin89/mistral-7b-instruct-ko-test-v0.3
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## wkshin89/mistral-7b-instruct-ko-test-v0.3 - GGUF
This repo contains GGUF format model files for [wkshin89/mistral-7b-instruct-ko-test-v0.3](https://huggingface.co/wkshin89/mistral-7b-instruct-ko-test-v0.3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
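A rough usage sketch for this template follows (the file name and prompt are placeholders; llama.cpp usually adds the BOS token itself, so the literal `<s>` can often be omitted):
```shell
# Sketch: wrap the user prompt in the [INST] template shown above (placeholder file name).
llama-cli -m MY_LOCAL_DIR/mistral-7b-instruct-ko-test-v0.3-Q4_K_M.gguf -n 128 \
  -p "[INST] Summarize what GGUF quantization does in one sentence. [/INST]"
```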
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral-7b-instruct-ko-test-v0.3-Q2_K.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-instruct-ko-test-v0.3-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [mistral-7b-instruct-ko-test-v0.3-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [mistral-7b-instruct-ko-test-v0.3-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [mistral-7b-instruct-ko-test-v0.3-Q4_0.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-instruct-ko-test-v0.3-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [mistral-7b-instruct-ko-test-v0.3-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [mistral-7b-instruct-ko-test-v0.3-Q5_0.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-instruct-ko-test-v0.3-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [mistral-7b-instruct-ko-test-v0.3-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [mistral-7b-instruct-ko-test-v0.3-Q6_K.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [mistral-7b-instruct-ko-test-v0.3-Q8_0.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF/blob/main/mistral-7b-instruct-ko-test-v0.3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF --include "mistral-7b-instruct-ko-test-v0.3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mistral-7b-instruct-ko-test-v0.3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MarcBeagle-7B-GGUF | tensorblock | 2025-04-21T00:34:07Z | 44 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MarcMistral-7B",
"leveldevai/TurdusBeagle-7B",
"TensorBlock",
"GGUF",
"base_model:leveldevai/MarcBeagle-7B",
"base_model:quantized:leveldevai/MarcBeagle-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T17:02:06Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MarcMistral-7B
- leveldevai/TurdusBeagle-7B
- TensorBlock
- GGUF
base_model: leveldevai/MarcBeagle-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## leveldevai/MarcBeagle-7B - GGUF
This repo contains GGUF format model files for [leveldevai/MarcBeagle-7B](https://huggingface.co/leveldevai/MarcBeagle-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MarcBeagle-7B-Q2_K.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [MarcBeagle-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [MarcBeagle-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [MarcBeagle-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [MarcBeagle-7B-Q4_0.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MarcBeagle-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [MarcBeagle-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [MarcBeagle-7B-Q5_0.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MarcBeagle-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [MarcBeagle-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [MarcBeagle-7B-Q6_K.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [MarcBeagle-7B-Q8_0.gguf](https://huggingface.co/tensorblock/MarcBeagle-7B-GGUF/blob/main/MarcBeagle-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MarcBeagle-7B-GGUF --include "MarcBeagle-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MarcBeagle-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Maylin-7b-GGUF | tensorblock | 2025-04-21T00:34:06Z | 27 | 0 | null | [
"gguf",
"mistral",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:Azazelle/Maylin-7b",
"base_model:quantized:Azazelle/Maylin-7b",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-21T16:22:50Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
- TensorBlock
- GGUF
license: cc-by-4.0
base_model: Azazelle/Maylin-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Azazelle/Maylin-7b - GGUF
This repo contains GGUF format model files for [Azazelle/Maylin-7b](https://huggingface.co/Azazelle/Maylin-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Maylin-7b-Q2_K.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Maylin-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Maylin-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Maylin-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Maylin-7b-Q4_0.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Maylin-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Maylin-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Maylin-7b-Q5_0.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Maylin-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Maylin-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Maylin-7b-Q6_K.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Maylin-7b-Q8_0.gguf](https://huggingface.co/tensorblock/Maylin-7b-GGUF/blob/main/Maylin-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Maylin-7b-GGUF --include "Maylin-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Maylin-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF | tensorblock | 2025-04-21T00:34:03Z | 32 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:diffnamehard/Psyfighter2-Noromaid-ties-Capybara-13B",
"base_model:quantized:diffnamehard/Psyfighter2-Noromaid-ties-Capybara-13B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T12:49:21Z | ---
license: cc-by-nc-4.0
base_model: diffnamehard/Psyfighter2-Noromaid-ties-Capybara-13B
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## diffnamehard/Psyfighter2-Noromaid-ties-Capybara-13B - GGUF
This repo contains GGUF format model files for [diffnamehard/Psyfighter2-Noromaid-ties-Capybara-13B](https://huggingface.co/diffnamehard/Psyfighter2-Noromaid-ties-Capybara-13B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q2_K.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q4_0.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q5_0.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q6_K.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [Psyfighter2-Noromaid-ties-Capybara-13B-Q8_0.gguf](https://huggingface.co/tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF/blob/main/Psyfighter2-Noromaid-ties-Capybara-13B-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF --include "Psyfighter2-Noromaid-ties-Capybara-13B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Psyfighter2-Noromaid-ties-Capybara-13B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/GML-Mistral-merged-v1-GGUF | tensorblock | 2025-04-21T00:33:53Z | 71 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:zyh3826/GML-Mistral-merged-v1",
"base_model:quantized:zyh3826/GML-Mistral-merged-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T10:23:14Z | ---
license: apache-2.0
tags:
- merge
- TensorBlock
- GGUF
base_model: zyh3826/GML-Mistral-merged-v1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## zyh3826/GML-Mistral-merged-v1 - GGUF
This repo contains GGUF format model files for [zyh3826/GML-Mistral-merged-v1](https://huggingface.co/zyh3826/GML-Mistral-merged-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [GML-Mistral-merged-v1-Q2_K.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q2_K.gguf) | Q2_K | 3.361 GB | smallest, significant quality loss - not recommended for most purposes |
| [GML-Mistral-merged-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q3_K_S.gguf) | Q3_K_S | 3.915 GB | very small, high quality loss |
| [GML-Mistral-merged-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q3_K_M.gguf) | Q3_K_M | 4.354 GB | very small, high quality loss |
| [GML-Mistral-merged-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q3_K_L.gguf) | Q3_K_L | 4.736 GB | small, substantial quality loss |
| [GML-Mistral-merged-v1-Q4_0.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q4_0.gguf) | Q4_0 | 5.091 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [GML-Mistral-merged-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q4_K_S.gguf) | Q4_K_S | 5.129 GB | small, greater quality loss |
| [GML-Mistral-merged-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q4_K_M.gguf) | Q4_K_M | 5.415 GB | medium, balanced quality - recommended |
| [GML-Mistral-merged-v1-Q5_0.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q5_0.gguf) | Q5_0 | 6.198 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [GML-Mistral-merged-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q5_K_S.gguf) | Q5_K_S | 6.198 GB | large, low quality loss - recommended |
| [GML-Mistral-merged-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q5_K_M.gguf) | Q5_K_M | 6.365 GB | large, very low quality loss - recommended |
| [GML-Mistral-merged-v1-Q6_K.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q6_K.gguf) | Q6_K | 7.374 GB | very large, extremely low quality loss |
| [GML-Mistral-merged-v1-Q8_0.gguf](https://huggingface.co/tensorblock/GML-Mistral-merged-v1-GGUF/blob/main/GML-Mistral-merged-v1-Q8_0.gguf) | Q8_0 | 9.550 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/GML-Mistral-merged-v1-GGUF --include "GML-Mistral-merged-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/GML-Mistral-merged-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Kunoichi-DPO-7B-GGUF | tensorblock | 2025-04-21T00:33:52Z | 40 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:SanjiWatsuki/Kunoichi-DPO-7B",
"base_model:quantized:SanjiWatsuki/Kunoichi-DPO-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T10:13:29Z | ---
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: SanjiWatsuki/Kunoichi-DPO-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## SanjiWatsuki/Kunoichi-DPO-7B - GGUF
This repo contains GGUF format model files for [SanjiWatsuki/Kunoichi-DPO-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Kunoichi-DPO-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Kunoichi-DPO-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Kunoichi-DPO-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Kunoichi-DPO-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Kunoichi-DPO-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Kunoichi-DPO-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Kunoichi-DPO-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Kunoichi-DPO-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Kunoichi-DPO-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Kunoichi-DPO-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Kunoichi-DPO-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Kunoichi-DPO-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Kunoichi-DPO-7B-GGUF/blob/main/Kunoichi-DPO-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Kunoichi-DPO-7B-GGUF --include "Kunoichi-DPO-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Kunoichi-DPO-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
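After downloading, you can also chat with the model interactively. The sketch below assumes a llama.cpp build recent enough to provide `llama-cli` with conversation mode (`-cnv`); the path and context size are illustrative.
```shell
# Sketch: interactive chat with the downloaded quant (assumes llama-cli supports -cnv; values are illustrative).
./llama-cli -m MY_LOCAL_DIR/Kunoichi-DPO-7B-Q4_K_M.gguf -cnv -c 4096
```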
|
tensorblock/lamatama-GGUF | tensorblock | 2025-04-21T00:33:50Z | 112 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:kevin009/lamatama",
"base_model:quantized:kevin009/lamatama",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T10:01:53Z | ---
language:
- en
license: apache-2.0
base_model: kevin009/lamatama
tags:
- TensorBlock
- GGUF
model-index:
- name: lamatama
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 61.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.67
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kevin009/lamatama - GGUF
This repo contains GGUF format model files for [kevin009/lamatama](https://huggingface.co/kevin009/lamatama).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [lamatama-Q2_K.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [lamatama-Q3_K_S.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [lamatama-Q3_K_M.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [lamatama-Q3_K_L.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [lamatama-Q4_0.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [lamatama-Q4_K_S.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [lamatama-Q4_K_M.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [lamatama-Q5_0.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [lamatama-Q5_K_S.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [lamatama-Q5_K_M.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [lamatama-Q6_K.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [lamatama-Q8_0.gguf](https://huggingface.co/tensorblock/lamatama-GGUF/blob/main/lamatama-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/lamatama-GGUF --include "lamatama-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/lamatama-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
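To try the model locally, you can fill the prompt template shown above and pass it to llama.cpp. This is a minimal sketch: it assumes a llama.cpp build with `llama-cli` on your PATH, and the system/user messages are placeholders.
```shell
# Sketch: fill the card's prompt template and run it with llama-cli (messages are placeholders).
PROMPT='<|system|>
You are a helpful assistant.</s>
<|user|>
What is the capital of France?</s>
<|assistant|>
'
./llama-cli -m MY_LOCAL_DIR/lamatama-Q4_K_M.gguf -p "$PROMPT" -n 64
```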
|
tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF | tensorblock | 2025-04-21T00:33:49Z | 184 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:UCLA-AGI/SPIN_iter1",
"base_model:UCLA-AGI/zephyr-7b-sft-full-SPIN-iter1",
"base_model:quantized:UCLA-AGI/zephyr-7b-sft-full-SPIN-iter1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-21T09:36:16Z | ---
license: mit
datasets:
- UCLA-AGI/SPIN_iter1
language:
- en
base_model: UCLA-AGI/zephyr-7b-sft-full-SPIN-iter1
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## UCLA-AGI/zephyr-7b-sft-full-SPIN-iter1 - GGUF
This repo contains GGUF format model files for [UCLA-AGI/zephyr-7b-sft-full-SPIN-iter1](https://huggingface.co/UCLA-AGI/zephyr-7b-sft-full-SPIN-iter1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [zephyr-7b-sft-full-SPIN-iter1-Q2_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-7b-sft-full-SPIN-iter1-Q3_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [zephyr-7b-sft-full-SPIN-iter1-Q3_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [zephyr-7b-sft-full-SPIN-iter1-Q3_K_L.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [zephyr-7b-sft-full-SPIN-iter1-Q4_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-7b-sft-full-SPIN-iter1-Q4_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [zephyr-7b-sft-full-SPIN-iter1-Q4_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [zephyr-7b-sft-full-SPIN-iter1-Q5_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-7b-sft-full-SPIN-iter1-Q5_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [zephyr-7b-sft-full-SPIN-iter1-Q5_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [zephyr-7b-sft-full-SPIN-iter1-Q6_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [zephyr-7b-sft-full-SPIN-iter1-Q8_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF --include "zephyr-7b-sft-full-SPIN-iter1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/zephyr-7b-sft-full-SPIN-iter1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
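If you prefer an HTTP endpoint over a one-off run, llama.cpp also ships a server binary. The command below is a sketch, assuming `llama-server` from a recent llama.cpp build; host, port, and context size are illustrative.
```shell
# Sketch: serve the downloaded quant over HTTP with llama.cpp's llama-server (host/port/context are illustrative).
./llama-server -m MY_LOCAL_DIR/zephyr-7b-sft-full-SPIN-iter1-Q4_K_M.gguf \
  -c 4096 --host 0.0.0.0 --port 8080
```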
|
tensorblock/CodeLlama-7b-Instruct-hf-GGUF | tensorblock | 2025-04-21T00:33:45Z | 134 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:hobbesleland/CodeLlama-7b-Instruct-hf",
"base_model:quantized:hobbesleland/CodeLlama-7b-Instruct-hf",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T08:47:46Z | ---
base_model: hobbesleland/CodeLlama-7b-Instruct-hf
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## hobbesleland/CodeLlama-7b-Instruct-hf - GGUF
This repo contains GGUF format model files for [hobbesleland/CodeLlama-7b-Instruct-hf](https://huggingface.co/hobbesleland/CodeLlama-7b-Instruct-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CodeLlama-7b-Instruct-hf-Q2_K.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [CodeLlama-7b-Instruct-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [CodeLlama-7b-Instruct-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [CodeLlama-7b-Instruct-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [CodeLlama-7b-Instruct-hf-Q4_0.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CodeLlama-7b-Instruct-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [CodeLlama-7b-Instruct-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [CodeLlama-7b-Instruct-hf-Q5_0.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CodeLlama-7b-Instruct-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [CodeLlama-7b-Instruct-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [CodeLlama-7b-Instruct-hf-Q6_K.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [CodeLlama-7b-Instruct-hf-Q8_0.gguf](https://huggingface.co/tensorblock/CodeLlama-7b-Instruct-hf-GGUF/blob/main/CodeLlama-7b-Instruct-hf-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CodeLlama-7b-Instruct-hf-GGUF --include "CodeLlama-7b-Instruct-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CodeLlama-7b-Instruct-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
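For a quick local smoke test of the code model, you can run a plain completion. The card does not specify a prompt template, so the sketch below simply uses a raw code prefix; it assumes `llama-cli` from a llama.cpp build, and the path and prompt are illustrative.
```shell
# Sketch: plain code completion with the downloaded quant (no chat template is assumed).
./llama-cli -m MY_LOCAL_DIR/CodeLlama-7b-Instruct-hf-Q4_K_M.gguf \
  -p "def fibonacci(n):" -n 128
```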
|
tensorblock/AIRIC-The-Mistral-GGUF | tensorblock | 2025-04-21T00:33:43Z | 67 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:Open-Orca/OpenOrca",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:tatsu-lab/alpaca",
"dataset:garage-bAInd/Open-Platypus",
"base_model:ericpolewski/AIRIC-The-Mistral",
"base_model:quantized:ericpolewski/AIRIC-The-Mistral",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T08:38:45Z | ---
license: mit
datasets:
- Open-Orca/OpenOrca
- ise-uiuc/Magicoder-Evol-Instruct-110K
- tatsu-lab/alpaca
- garage-bAInd/Open-Platypus
base_model: ericpolewski/AIRIC-The-Mistral
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## ericpolewski/AIRIC-The-Mistral - GGUF
This repo contains GGUF format model files for [ericpolewski/AIRIC-The-Mistral](https://huggingface.co/ericpolewski/AIRIC-The-Mistral).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [AIRIC-The-Mistral-Q2_K.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [AIRIC-The-Mistral-Q3_K_S.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [AIRIC-The-Mistral-Q3_K_M.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [AIRIC-The-Mistral-Q3_K_L.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [AIRIC-The-Mistral-Q4_0.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [AIRIC-The-Mistral-Q4_K_S.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [AIRIC-The-Mistral-Q4_K_M.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [AIRIC-The-Mistral-Q5_0.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [AIRIC-The-Mistral-Q5_K_S.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [AIRIC-The-Mistral-Q5_K_M.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [AIRIC-The-Mistral-Q6_K.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [AIRIC-The-Mistral-Q8_0.gguf](https://huggingface.co/tensorblock/AIRIC-The-Mistral-GGUF/blob/main/AIRIC-The-Mistral-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/AIRIC-The-Mistral-GGUF --include "AIRIC-The-Mistral-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/AIRIC-The-Mistral-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF | tensorblock | 2025-04-21T00:33:41Z | 85 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:UCLA-AGI/SPIN_iter2",
"base_model:UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2",
"base_model:quantized:UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-21T08:11:27Z | ---
license: mit
datasets:
- UCLA-AGI/SPIN_iter2
language:
- en
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2 - GGUF
This repo contains GGUF format model files for [UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2](https://huggingface.co/UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [zephyr-7b-sft-full-SPIN-iter2-Q2_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-7b-sft-full-SPIN-iter2-Q3_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [zephyr-7b-sft-full-SPIN-iter2-Q3_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [zephyr-7b-sft-full-SPIN-iter2-Q3_K_L.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [zephyr-7b-sft-full-SPIN-iter2-Q4_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-7b-sft-full-SPIN-iter2-Q4_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [zephyr-7b-sft-full-SPIN-iter2-Q4_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [zephyr-7b-sft-full-SPIN-iter2-Q5_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-7b-sft-full-SPIN-iter2-Q5_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [zephyr-7b-sft-full-SPIN-iter2-Q5_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [zephyr-7b-sft-full-SPIN-iter2-Q6_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [zephyr-7b-sft-full-SPIN-iter2-Q8_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF --include "zephyr-7b-sft-full-SPIN-iter2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/zephyr-7b-sft-full-SPIN-iter2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Beyonder-4x7b-GGUF | tensorblock | 2025-04-21T00:33:40Z | 50 | 0 | null | [
"gguf",
"moe",
"mergekit",
"TensorBlock",
"GGUF",
"base_model:mlabonne/Beyonder-4x7b",
"base_model:quantized:mlabonne/Beyonder-4x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T07:39:05Z | ---
license: apache-2.0
tags:
- moe
- mergekit
- TensorBlock
- GGUF
base_model: mlabonne/Beyonder-4x7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mlabonne/Beyonder-4x7b - GGUF
This repo contains GGUF format model files for [mlabonne/Beyonder-4x7b](https://huggingface.co/mlabonne/Beyonder-4x7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>GPT4 Correct System: {system_prompt}<|end_of_turn|>GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Beyonder-4x7b-Q2_K.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q2_K.gguf) | Q2_K | 8.843 GB | smallest, significant quality loss - not recommended for most purposes |
| [Beyonder-4x7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q3_K_S.gguf) | Q3_K_S | 10.433 GB | very small, high quality loss |
| [Beyonder-4x7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q3_K_M.gguf) | Q3_K_M | 11.580 GB | very small, high quality loss |
| [Beyonder-4x7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q3_K_L.gguf) | Q3_K_L | 12.544 GB | small, substantial quality loss |
| [Beyonder-4x7b-Q4_0.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q4_0.gguf) | Q4_0 | 13.624 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Beyonder-4x7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q4_K_S.gguf) | Q4_K_S | 13.743 GB | small, greater quality loss |
| [Beyonder-4x7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q4_K_M.gguf) | Q4_K_M | 14.610 GB | medium, balanced quality - recommended |
| [Beyonder-4x7b-Q5_0.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q5_0.gguf) | Q5_0 | 16.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Beyonder-4x7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q5_K_S.gguf) | Q5_K_S | 16.626 GB | large, low quality loss - recommended |
| [Beyonder-4x7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q5_K_M.gguf) | Q5_K_M | 17.134 GB | large, very low quality loss - recommended |
| [Beyonder-4x7b-Q6_K.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q6_K.gguf) | Q6_K | 19.817 GB | very large, extremely low quality loss |
| [Beyonder-4x7b-Q8_0.gguf](https://huggingface.co/tensorblock/Beyonder-4x7b-GGUF/blob/main/Beyonder-4x7b-Q8_0.gguf) | Q8_0 | 25.666 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Beyonder-4x7b-GGUF --include "Beyonder-4x7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Beyonder-4x7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
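Because this is a 4x7B mixture-of-experts model, you will likely want GPU offload when running it locally. The command below is a sketch, assuming a CUDA or Metal build of llama.cpp with `llama-cli`; the layer count, context size, and prompt (which follows the template above) are illustrative.
```shell
# Sketch: run the MoE quant with GPU offload (assumes a GPU-enabled llama.cpp build; values are illustrative).
./llama-cli -m MY_LOCAL_DIR/Beyonder-4x7b-Q4_K_M.gguf -ngl 99 -c 4096 -n 200 \
  -p "<s>GPT4 Correct System: You are a helpful assistant.<|end_of_turn|>GPT4 Correct User: Summarize what a mixture-of-experts model is.<|end_of_turn|>GPT4 Correct Assistant:"
```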
|
tensorblock/SuperChat-7B-GGUF | tensorblock | 2025-04-21T00:33:36Z | 26 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:yashmarathe/SuperChat-7B",
"base_model:quantized:yashmarathe/SuperChat-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T06:07:17Z | ---
license: apache-2.0
tags:
- merge
- TensorBlock
- GGUF
base_model: yashmarathe/SuperChat-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## yashmarathe/SuperChat-7B - GGUF
This repo contains GGUF format model files for [yashmarathe/SuperChat-7B](https://huggingface.co/yashmarathe/SuperChat-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SuperChat-7B-Q2_K.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [SuperChat-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [SuperChat-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [SuperChat-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [SuperChat-7B-Q4_0.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SuperChat-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [SuperChat-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [SuperChat-7B-Q5_0.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SuperChat-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [SuperChat-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [SuperChat-7B-Q6_K.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [SuperChat-7B-Q8_0.gguf](https://huggingface.co/tensorblock/SuperChat-7B-GGUF/blob/main/SuperChat-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SuperChat-7B-GGUF --include "SuperChat-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SuperChat-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/TeenyTinyLlama-160m-GGUF | tensorblock | 2025-04-21T00:33:34Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"TensorBlock",
"GGUF",
"text-generation",
"pt",
"dataset:nicholasKluge/Pt-Corpus-Instruct",
"base_model:nicholasKluge/TeenyTinyLlama-160m",
"base_model:quantized:nicholasKluge/TeenyTinyLlama-160m",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-21T06:02:46Z | ---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
- TensorBlock
- GGUF
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: 'A PUCRS Γ© uma universidade '
example_title: Exemplo
- text: A muitos anos atrΓ‘s, em uma galΓ‘xia muito distante, vivia uma raΓ§a de
example_title: Exemplo
- text: Em meio a um escΓ’ndalo, a frente parlamentar pediu ao Senador Silva para
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 20
top_p: 0.2
max_new_tokens: 150
co2_eq_emissions:
emissions: 5600
source: CodeCarbon
training_type: pre-training
geographical_location: Germany
hardware_used: NVIDIA A100-SXM4-40GB
base_model: nicholasKluge/TeenyTinyLlama-160m
model-index:
- name: TeenyTinyLlama-160m
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 19.24
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 23.09
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 22.37
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 53.97
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 0.24
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 43.97
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 36.92
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 42.63
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 11.39
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m
name: Open Portuguese LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## nicholasKluge/TeenyTinyLlama-160m - GGUF
This repo contains GGUF format model files for [nicholasKluge/TeenyTinyLlama-160m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TeenyTinyLlama-160m-Q2_K.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q2_K.gguf) | Q2_K | 0.071 GB | smallest, significant quality loss - not recommended for most purposes |
| [TeenyTinyLlama-160m-Q3_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q3_K_S.gguf) | Q3_K_S | 0.080 GB | very small, high quality loss |
| [TeenyTinyLlama-160m-Q3_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q3_K_M.gguf) | Q3_K_M | 0.086 GB | very small, high quality loss |
| [TeenyTinyLlama-160m-Q3_K_L.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q3_K_L.gguf) | Q3_K_L | 0.091 GB | small, substantial quality loss |
| [TeenyTinyLlama-160m-Q4_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q4_0.gguf) | Q4_0 | 0.099 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TeenyTinyLlama-160m-Q4_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q4_K_S.gguf) | Q4_K_S | 0.099 GB | small, greater quality loss |
| [TeenyTinyLlama-160m-Q4_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q4_K_M.gguf) | Q4_K_M | 0.103 GB | medium, balanced quality - recommended |
| [TeenyTinyLlama-160m-Q5_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q5_0.gguf) | Q5_0 | 0.116 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TeenyTinyLlama-160m-Q5_K_S.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q5_K_S.gguf) | Q5_K_S | 0.116 GB | large, low quality loss - recommended |
| [TeenyTinyLlama-160m-Q5_K_M.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q5_K_M.gguf) | Q5_K_M | 0.118 GB | large, very low quality loss - recommended |
| [TeenyTinyLlama-160m-Q6_K.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q6_K.gguf) | Q6_K | 0.134 GB | very large, extremely low quality loss |
| [TeenyTinyLlama-160m-Q8_0.gguf](https://huggingface.co/tensorblock/TeenyTinyLlama-160m-GGUF/blob/main/TeenyTinyLlama-160m-Q8_0.gguf) | Q8_0 | 0.173 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TeenyTinyLlama-160m-GGUF --include "TeenyTinyLlama-160m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TeenyTinyLlama-160m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
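Once a file is downloaded, a quick way to check that it loads is to run it with the `llama-cli` binary from a llama.cpp build at or after the commit noted above. The sketch below is a minimal, illustrative example — the binary name, flags, and the `MY_LOCAL_DIR` path mirror the download step above and may need adjusting for your build:
```shell
# Minimal smoke test with llama.cpp's CLI: -m selects the downloaded GGUF,
# -p gives a plain completion prompt, and -n caps the number of generated tokens.
./llama-cli -m MY_LOCAL_DIR/TeenyTinyLlama-160m-Q2_K.gguf -p "Once upon a time" -n 64
```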
|
tensorblock/tinylamma-20000-GGUF | tensorblock | 2025-04-21T00:33:31Z | 35 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:shitshow123/tinylamma-20000",
"base_model:quantized:shitshow123/tinylamma-20000",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T05:41:11Z | ---
license: apache-2.0
base_model: shitshow123/tinylamma-20000
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## shitshow123/tinylamma-20000 - GGUF
This repo contains GGUF format model files for [shitshow123/tinylamma-20000](https://huggingface.co/shitshow123/tinylamma-20000).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
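One way to use this template from the command line is to fill in the placeholders yourself and pass the result as the prompt. The sketch below assumes a bash shell (the `$'...'` quoting expands the `\n` escapes), a recent llama.cpp build, and a quant downloaded as shown in the instructions further down; the file name and prompt text are illustrative:
```shell
# Fill the chat template manually and pass it as a single prompt string.
./llama-cli -m MY_LOCAL_DIR/tinylamma-20000-Q4_K_M.gguf -n 128 \
  -p $'<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nSummarize what GGUF is.</s>\n<|assistant|>\n'
```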
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [tinylamma-20000-Q2_K.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [tinylamma-20000-Q3_K_S.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [tinylamma-20000-Q3_K_M.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [tinylamma-20000-Q3_K_L.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [tinylamma-20000-Q4_0.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tinylamma-20000-Q4_K_S.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [tinylamma-20000-Q4_K_M.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [tinylamma-20000-Q5_0.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tinylamma-20000-Q5_K_S.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [tinylamma-20000-Q5_K_M.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [tinylamma-20000-Q6_K.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [tinylamma-20000-Q8_0.gguf](https://huggingface.co/tensorblock/tinylamma-20000-GGUF/blob/main/tinylamma-20000-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/tinylamma-20000-GGUF --include "tinylamma-20000-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/tinylamma-20000-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Sirius-10B-GGUF | tensorblock | 2025-04-21T00:33:30Z | 25 | 0 | null | [
"gguf",
"merge",
"leveldevai/TurdusBeagle-7B",
"FelixChao/Severus-7B",
"TensorBlock",
"GGUF",
"base_model:FelixChao/Sirius-10B",
"base_model:quantized:FelixChao/Sirius-10B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T04:33:59Z | ---
license: apache-2.0
tags:
- merge
- leveldevai/TurdusBeagle-7B
- FelixChao/Severus-7B
- TensorBlock
- GGUF
base_model: FelixChao/Sirius-10B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## FelixChao/Sirius-10B - GGUF
This repo contains GGUF format model files for [FelixChao/Sirius-10B](https://huggingface.co/FelixChao/Sirius-10B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Sirius-10B-Q2_K.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [Sirius-10B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [Sirius-10B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [Sirius-10B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [Sirius-10B-Q4_0.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Sirius-10B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [Sirius-10B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [Sirius-10B-Q5_0.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Sirius-10B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [Sirius-10B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [Sirius-10B-Q6_K.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [Sirius-10B-Q8_0.gguf](https://huggingface.co/tensorblock/Sirius-10B-GGUF/blob/main/Sirius-10B-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Sirius-10B-GGUF --include "Sirius-10B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Sirius-10B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF | tensorblock | 2025-04-21T00:33:28Z | 59 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:Intel/orca_dpo_pairs",
"base_model:HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v3",
"base_model:quantized:HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T04:11:57Z | ---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
base_model: HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v3
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v3 - GGUF
This repo contains GGUF format model files for [HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v3](https://huggingface.co/HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q2_K.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q3_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q3_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q3_K_L.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q4_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q4_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q4_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q5_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q5_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q5_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q6_K.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [dolphin-2.6-mistral-7b-dpo-orca-v3-Q8_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-orca-v3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF --include "dolphin-2.6-mistral-7b-dpo-orca-v3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/dolphin-2.6-mistral-7b-dpo-orca-v3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/agiin-11.1B-v0.0-GGUF | tensorblock | 2025-04-21T00:33:27Z | 27 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:mncai/agiin-11.1B-v0.0",
"base_model:quantized:mncai/agiin-11.1B-v0.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T03:15:21Z | ---
license: apache-2.0
language:
- en
base_model: mncai/agiin-11.1B-v0.0
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mncai/agiin-11.1B-v0.0 - GGUF
This repo contains GGUF format model files for [mncai/agiin-11.1B-v0.0](https://huggingface.co/mncai/agiin-11.1B-v0.0).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
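For a multi-line template like this, another option is to write the filled-in prompt to a file and let `llama-cli` read it. This is a rough sketch assuming a recent llama.cpp build; the quant file name, the `-f` usage, and the prompt contents are illustrative rather than prescriptive:
```shell
# Write the filled-in template to a file, then point llama-cli at it with -f.
cat > prompt.txt <<'EOF'
### System:
You are a concise assistant.
### User:
Explain the difference between Q4_K_M and Q5_K_M quantization in one sentence.
### Assistant:
EOF
./llama-cli -m MY_LOCAL_DIR/agiin-11.1B-v0.0-Q4_K_M.gguf -f prompt.txt -n 128
```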
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [agiin-11.1B-v0.0-Q2_K.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q2_K.gguf) | Q2_K | 4.164 GB | smallest, significant quality loss - not recommended for most purposes |
| [agiin-11.1B-v0.0-Q3_K_S.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q3_K_S.gguf) | Q3_K_S | 4.852 GB | very small, high quality loss |
| [agiin-11.1B-v0.0-Q3_K_M.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q3_K_M.gguf) | Q3_K_M | 5.404 GB | very small, high quality loss |
| [agiin-11.1B-v0.0-Q3_K_L.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q3_K_L.gguf) | Q3_K_L | 5.879 GB | small, substantial quality loss |
| [agiin-11.1B-v0.0-Q4_0.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q4_0.gguf) | Q4_0 | 6.318 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [agiin-11.1B-v0.0-Q4_K_S.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q4_K_S.gguf) | Q4_K_S | 6.364 GB | small, greater quality loss |
| [agiin-11.1B-v0.0-Q4_K_M.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q4_K_M.gguf) | Q4_K_M | 6.723 GB | medium, balanced quality - recommended |
| [agiin-11.1B-v0.0-Q5_0.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q5_0.gguf) | Q5_0 | 7.697 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [agiin-11.1B-v0.0-Q5_K_S.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q5_K_S.gguf) | Q5_K_S | 7.697 GB | large, low quality loss - recommended |
| [agiin-11.1B-v0.0-Q5_K_M.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q5_K_M.gguf) | Q5_K_M | 7.906 GB | large, very low quality loss - recommended |
| [agiin-11.1B-v0.0-Q6_K.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q6_K.gguf) | Q6_K | 9.163 GB | very large, extremely low quality loss |
| [agiin-11.1B-v0.0-Q8_0.gguf](https://huggingface.co/tensorblock/agiin-11.1B-v0.0-GGUF/blob/main/agiin-11.1B-v0.0-Q8_0.gguf) | Q8_0 | 11.868 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/agiin-11.1B-v0.0-GGUF --include "agiin-11.1B-v0.0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/agiin-11.1B-v0.0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/zephyr-220m-sft-full-GGUF | tensorblock | 2025-04-21T00:33:25Z | 11 | 0 | null | [
"gguf",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:BEE-spoke-data/zephyr-220m-sft-full",
"base_model:quantized:BEE-spoke-data/zephyr-220m-sft-full",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T03:05:23Z | ---
license: apache-2.0
base_model: BEE-spoke-data/zephyr-220m-sft-full
tags:
- generated_from_trainer
- TensorBlock
- GGUF
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: zephyr-220m-sft-full
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## BEE-spoke-data/zephyr-220m-sft-full - GGUF
This repo contains GGUF format model files for [BEE-spoke-data/zephyr-220m-sft-full](https://huggingface.co/BEE-spoke-data/zephyr-220m-sft-full).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [zephyr-220m-sft-full-Q2_K.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q2_K.gguf) | Q2_K | 0.094 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-220m-sft-full-Q3_K_S.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q3_K_S.gguf) | Q3_K_S | 0.107 GB | very small, high quality loss |
| [zephyr-220m-sft-full-Q3_K_M.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q3_K_M.gguf) | Q3_K_M | 0.115 GB | very small, high quality loss |
| [zephyr-220m-sft-full-Q3_K_L.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q3_K_L.gguf) | Q3_K_L | 0.121 GB | small, substantial quality loss |
| [zephyr-220m-sft-full-Q4_0.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q4_0.gguf) | Q4_0 | 0.132 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-220m-sft-full-Q4_K_S.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q4_K_S.gguf) | Q4_K_S | 0.132 GB | small, greater quality loss |
| [zephyr-220m-sft-full-Q4_K_M.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q4_K_M.gguf) | Q4_K_M | 0.138 GB | medium, balanced quality - recommended |
| [zephyr-220m-sft-full-Q5_0.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q5_0.gguf) | Q5_0 | 0.155 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-220m-sft-full-Q5_K_S.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q5_K_S.gguf) | Q5_K_S | 0.155 GB | large, low quality loss - recommended |
| [zephyr-220m-sft-full-Q5_K_M.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q5_K_M.gguf) | Q5_K_M | 0.158 GB | large, very low quality loss - recommended |
| [zephyr-220m-sft-full-Q6_K.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q6_K.gguf) | Q6_K | 0.180 GB | very large, extremely low quality loss |
| [zephyr-220m-sft-full-Q8_0.gguf](https://huggingface.co/tensorblock/zephyr-220m-sft-full-GGUF/blob/main/zephyr-220m-sft-full-Q8_0.gguf) | Q8_0 | 0.232 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/zephyr-220m-sft-full-GGUF --include "zephyr-220m-sft-full-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/zephyr-220m-sft-full-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/hepu-o4zf-ravz-7-0-GGUF | tensorblock | 2025-04-21T00:33:24Z | 35 | 0 | null | [
"gguf",
"autotrain",
"text-generation",
"TensorBlock",
"GGUF",
"base_model:abhishek/hepu-o4zf-ravz-7-0",
"base_model:quantized:abhishek/hepu-o4zf-ravz-7-0",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-21T02:31:35Z | ---
tags:
- autotrain
- text-generation
- TensorBlock
- GGUF
widget:
- text: 'I love AutoTrain because '
license: other
base_model: abhishek/hepu-o4zf-ravz-7-0
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## abhishek/hepu-o4zf-ravz-7-0 - GGUF
This repo contains GGUF format model files for [abhishek/hepu-o4zf-ravz-7-0](https://huggingface.co/abhishek/hepu-o4zf-ravz-7-0).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [hepu-o4zf-ravz-7-0-Q2_K.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [hepu-o4zf-ravz-7-0-Q3_K_S.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [hepu-o4zf-ravz-7-0-Q3_K_M.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [hepu-o4zf-ravz-7-0-Q3_K_L.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [hepu-o4zf-ravz-7-0-Q4_0.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [hepu-o4zf-ravz-7-0-Q4_K_S.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [hepu-o4zf-ravz-7-0-Q4_K_M.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [hepu-o4zf-ravz-7-0-Q5_0.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [hepu-o4zf-ravz-7-0-Q5_K_S.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [hepu-o4zf-ravz-7-0-Q5_K_M.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [hepu-o4zf-ravz-7-0-Q6_K.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [hepu-o4zf-ravz-7-0-Q8_0.gguf](https://huggingface.co/tensorblock/hepu-o4zf-ravz-7-0-GGUF/blob/main/hepu-o4zf-ravz-7-0-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/hepu-o4zf-ravz-7-0-GGUF --include "hepu-o4zf-ravz-7-0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/hepu-o4zf-ravz-7-0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mixtral-8x7B-v0.1-GGUF | tensorblock | 2025-04-21T00:33:22Z | 60 | 0 | null | [
"gguf",
"moe",
"TensorBlock",
"GGUF",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:quantized:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-21T01:59:01Z | ---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
- TensorBlock
- GGUF
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/fr/terms/">Privacy Policy</a>.
base_model: mistralai/Mixtral-8x7B-v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mistralai/Mixtral-8x7B-v0.1 - GGUF
This repo contains GGUF format model files for [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mixtral-8x7B-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mixtral-8x7B-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [Mixtral-8x7B-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [Mixtral-8x7B-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [Mixtral-8x7B-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mixtral-8x7B-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [Mixtral-8x7B-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [Mixtral-8x7B-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mixtral-8x7B-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [Mixtral-8x7B-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [Mixtral-8x7B-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [Mixtral-8x7B-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Mixtral-8x7B-v0.1-GGUF/blob/main/Mixtral-8x7B-v0.1-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mixtral-8x7B-v0.1-GGUF --include "Mixtral-8x7B-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mixtral-8x7B-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
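Note that these Mixtral quants are large — even Q4_K_M is close to 30 GB — so partial GPU offload is often needed for acceptable speed. The sketch below assumes a llama.cpp build with GPU support (CUDA or Metal); the `-ngl` layer count and the file name are placeholders to tune for your hardware:
```shell
# Offload some layers to the GPU with -ngl; the remaining layers stay on the CPU.
./llama-cli -m MY_LOCAL_DIR/Mixtral-8x7B-v0.1-Q4_K_M.gguf -ngl 20 -p "The capital of France is" -n 32
```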
|
tensorblock/Barcenas-10.7b-GGUF | tensorblock | 2025-04-21T00:33:18Z | 37 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"es",
"base_model:Danielbrdz/Barcenas-10.7b",
"base_model:quantized:Danielbrdz/Barcenas-10.7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T00:46:12Z | ---
license: apache-2.0
language:
- en
- es
tags:
- TensorBlock
- GGUF
base_model: Danielbrdz/Barcenas-10.7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Danielbrdz/Barcenas-10.7b - GGUF
This repo contains GGUF format model files for [Danielbrdz/Barcenas-10.7b](https://huggingface.co/Danielbrdz/Barcenas-10.7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Barcenas-10.7b-Q2_K.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [Barcenas-10.7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [Barcenas-10.7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [Barcenas-10.7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [Barcenas-10.7b-Q4_0.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Barcenas-10.7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [Barcenas-10.7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [Barcenas-10.7b-Q5_0.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Barcenas-10.7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [Barcenas-10.7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [Barcenas-10.7b-Q6_K.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [Barcenas-10.7b-Q8_0.gguf](https://huggingface.co/tensorblock/Barcenas-10.7b-GGUF/blob/main/Barcenas-10.7b-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Barcenas-10.7b-GGUF --include "Barcenas-10.7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Barcenas-10.7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
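Because this model uses a ChatML-style template, an interactive session is usually the easiest way to try it. Recent llama.cpp builds offer a conversation mode that applies the chat template stored in the GGUF metadata when one is present; the flag name and file below are assumptions about such a build rather than guarantees:
```shell
# -cnv starts an interactive chat; the model's embedded chat template is applied if available.
./llama-cli -m MY_LOCAL_DIR/Barcenas-10.7b-Q4_K_M.gguf -cnv
```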
|
tensorblock/StopCarbon-10.7B-v2-GGUF | tensorblock | 2025-04-21T00:33:08Z | 36 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"en",
"base_model:kekmodel/StopCarbon-10.7B-v2",
"base_model:quantized:kekmodel/StopCarbon-10.7B-v2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T00:04:07Z | ---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
- TensorBlock
- GGUF
base_model: kekmodel/StopCarbon-10.7B-v2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kekmodel/StopCarbon-10.7B-v2 - GGUF
This repo contains GGUF format model files for [kekmodel/StopCarbon-10.7B-v2](https://huggingface.co/kekmodel/StopCarbon-10.7B-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [StopCarbon-10.7B-v2-Q2_K.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [StopCarbon-10.7B-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [StopCarbon-10.7B-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [StopCarbon-10.7B-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [StopCarbon-10.7B-v2-Q4_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [StopCarbon-10.7B-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [StopCarbon-10.7B-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [StopCarbon-10.7B-v2-Q5_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [StopCarbon-10.7B-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [StopCarbon-10.7B-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [StopCarbon-10.7B-v2-Q6_K.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [StopCarbon-10.7B-v2-Q8_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v2-GGUF/blob/main/StopCarbon-10.7B-v2-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/StopCarbon-10.7B-v2-GGUF --include "StopCarbon-10.7B-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/StopCarbon-10.7B-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mhm-7b-v1.3-GGUF | tensorblock | 2025-04-21T00:33:04Z | 29 | 0 | null | [
"gguf",
"moe",
"merge",
"TensorBlock",
"GGUF",
"base_model:h2m/mhm-7b-v1.3",
"base_model:quantized:h2m/mhm-7b-v1.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-20T23:25:24Z | ---
tags:
- moe
- merge
- TensorBlock
- GGUF
license: apache-2.0
base_model: h2m/mhm-7b-v1.3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## h2m/mhm-7b-v1.3 - GGUF
This repo contains GGUF format model files for [h2m/mhm-7b-v1.3](https://huggingface.co/h2m/mhm-7b-v1.3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mhm-7b-v1.3-Q2_K.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [mhm-7b-v1.3-Q3_K_S.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [mhm-7b-v1.3-Q3_K_M.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [mhm-7b-v1.3-Q3_K_L.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [mhm-7b-v1.3-Q4_0.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mhm-7b-v1.3-Q4_K_S.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [mhm-7b-v1.3-Q4_K_M.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [mhm-7b-v1.3-Q5_0.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mhm-7b-v1.3-Q5_K_S.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [mhm-7b-v1.3-Q5_K_M.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [mhm-7b-v1.3-Q6_K.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [mhm-7b-v1.3-Q8_0.gguf](https://huggingface.co/tensorblock/mhm-7b-v1.3-GGUF/blob/main/mhm-7b-v1.3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mhm-7b-v1.3-GGUF --include "mhm-7b-v1.3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mhm-7b-v1.3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
amstf/ctmri_adallama | amstf | 2025-04-21T00:33:02Z | 0 | 0 | null | [
"safetensors",
"mllama",
"region:us"
] | null | 2025-04-20T22:49:18Z | ## UNNC FYP
<p>This model belongs to my 2025 UNNC FYP (final-year project), and my student ID is 20412245.</p> |
tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF | tensorblock | 2025-04-21T00:32:59Z | 27 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct",
"base_model:quantized:Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-20T23:20:49Z | ---
license: cc-by-nc-4.0
tags:
- merge
- TensorBlock
- GGUF
base_model: Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
model-index:
- name: SauerkrautLM-UNA-SOLAR-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct - GGUF
This repo contains GGUF format model files for [Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [SauerkrautLM-UNA-SOLAR-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/SauerkrautLM-UNA-SOLAR-Instruct-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF --include "SauerkrautLM-UNA-SOLAR-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SauerkrautLM-UNA-SOLAR-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
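For interactive or API-style use, the downloaded file can also be served over HTTP with llama.cpp's bundled server. This is a minimal sketch, assuming a local llama.cpp build that provides the `llama-server` binary; the address, port, and context size shown are illustrative:
```shell
# Serve the Q4_K_M quant locally over HTTP with llama.cpp's server (assumed local build)
./llama-server -m MY_LOCAL_DIR/SauerkrautLM-UNA-SOLAR-Instruct-Q4_K_M.gguf --host 127.0.0.1 --port 8080 -c 4096
```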
|
tensorblock/NeuralHermes-MoE-2x7B-GGUF | tensorblock | 2025-04-21T00:32:53Z | 14 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"en",
"base_model:ibndias/NeuralHermes-MoE-2x7B",
"base_model:quantized:ibndias/NeuralHermes-MoE-2x7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T21:52:43Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- TensorBlock
- GGUF
base_model: ibndias/NeuralHermes-MoE-2x7B
model-index:
- name: NeuralHermes-MoE-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/NeuralHermes-MoE-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/NeuralHermes-MoE-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/NeuralHermes-MoE-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.61
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/NeuralHermes-MoE-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/NeuralHermes-MoE-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.86
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/NeuralHermes-MoE-2x7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## ibndias/NeuralHermes-MoE-2x7B - GGUF
This repo contains GGUF format model files for [ibndias/NeuralHermes-MoE-2x7B](https://huggingface.co/ibndias/NeuralHermes-MoE-2x7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralHermes-MoE-2x7B-Q2_K.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q2_K.gguf) | Q2_K | 4.761 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralHermes-MoE-2x7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q3_K_S.gguf) | Q3_K_S | 5.588 GB | very small, high quality loss |
| [NeuralHermes-MoE-2x7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q3_K_M.gguf) | Q3_K_M | 6.206 GB | very small, high quality loss |
| [NeuralHermes-MoE-2x7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q3_K_L.gguf) | Q3_K_L | 6.730 GB | small, substantial quality loss |
| [NeuralHermes-MoE-2x7B-Q4_0.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q4_0.gguf) | Q4_0 | 7.281 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralHermes-MoE-2x7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q4_K_S.gguf) | Q4_K_S | 7.342 GB | small, greater quality loss |
| [NeuralHermes-MoE-2x7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q4_K_M.gguf) | Q4_K_M | 7.783 GB | medium, balanced quality - recommended |
| [NeuralHermes-MoE-2x7B-Q5_0.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q5_0.gguf) | Q5_0 | 8.874 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralHermes-MoE-2x7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q5_K_S.gguf) | Q5_K_S | 8.874 GB | large, low quality loss - recommended |
| [NeuralHermes-MoE-2x7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q5_K_M.gguf) | Q5_K_M | 9.133 GB | large, very low quality loss - recommended |
| [NeuralHermes-MoE-2x7B-Q6_K.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q6_K.gguf) | Q6_K | 10.567 GB | very large, extremely low quality loss |
| [NeuralHermes-MoE-2x7B-Q8_0.gguf](https://huggingface.co/tensorblock/NeuralHermes-MoE-2x7B-GGUF/blob/main/NeuralHermes-MoE-2x7B-Q8_0.gguf) | Q8_0 | 13.686 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/NeuralHermes-MoE-2x7B-GGUF --include "NeuralHermes-MoE-2x7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/NeuralHermes-MoE-2x7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
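If disk space is not a concern, omitting the `--include` filter fetches every quantization in this repo in a single call (`MY_LOCAL_DIR` remains a placeholder for a directory of your choice):
```shell
# Download all GGUF files from this repo into one local directory
huggingface-cli download tensorblock/NeuralHermes-MoE-2x7B-GGUF --local-dir MY_LOCAL_DIR
```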
|
tensorblock/Bald-Eagle-7B-GGUF | tensorblock | 2025-04-21T00:32:52Z | 57 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:cookinai/Bald-Eagle-7B",
"base_model:quantized:cookinai/Bald-Eagle-7B",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T21:07:29Z | ---
license: cc-by-nc-nd-4.0
tags:
- TensorBlock
- GGUF
base_model: cookinai/Bald-Eagle-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cookinai/Bald-Eagle-7B - GGUF
This repo contains GGUF format model files for [cookinai/Bald-Eagle-7B](https://huggingface.co/cookinai/Bald-Eagle-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Bald-Eagle-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Bald-Eagle-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Bald-Eagle-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Bald-Eagle-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Bald-Eagle-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Bald-Eagle-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Bald-Eagle-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Bald-Eagle-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Bald-Eagle-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Bald-Eagle-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Bald-Eagle-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Bald-Eagle-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Bald-Eagle-7B-GGUF/blob/main/Bald-Eagle-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Bald-Eagle-7B-GGUF --include "Bald-Eagle-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Bald-Eagle-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ko-wand-136M-GGUF | tensorblock | 2025-04-21T00:32:48Z | 23 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"en",
"base_model:instructkr/ko-wand-136M",
"base_model:quantized:instructkr/ko-wand-136M",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-20T19:27:41Z | ---
license:
- apache-2.0
language:
- ko
- en
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: instructkr/ko-wand-136M
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## instructkr/ko-wand-136M - GGUF
This repo contains GGUF format model files for [instructkr/ko-wand-136M](https://huggingface.co/instructkr/ko-wand-136M).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ko-wand-136M-Q2_K.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q2_K.gguf) | Q2_K | 0.061 GB | smallest, significant quality loss - not recommended for most purposes |
| [ko-wand-136M-Q3_K_S.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q3_K_S.gguf) | Q3_K_S | 0.069 GB | very small, high quality loss |
| [ko-wand-136M-Q3_K_M.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q3_K_M.gguf) | Q3_K_M | 0.073 GB | very small, high quality loss |
| [ko-wand-136M-Q3_K_L.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q3_K_L.gguf) | Q3_K_L | 0.077 GB | small, substantial quality loss |
| [ko-wand-136M-Q4_0.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q4_0.gguf) | Q4_0 | 0.084 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ko-wand-136M-Q4_K_S.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q4_K_S.gguf) | Q4_K_S | 0.084 GB | small, greater quality loss |
| [ko-wand-136M-Q4_K_M.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q4_K_M.gguf) | Q4_K_M | 0.087 GB | medium, balanced quality - recommended |
| [ko-wand-136M-Q5_0.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q5_0.gguf) | Q5_0 | 0.098 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ko-wand-136M-Q5_K_S.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q5_K_S.gguf) | Q5_K_S | 0.098 GB | large, low quality loss - recommended |
| [ko-wand-136M-Q5_K_M.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q5_K_M.gguf) | Q5_K_M | 0.100 GB | large, very low quality loss - recommended |
| [ko-wand-136M-Q6_K.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q6_K.gguf) | Q6_K | 0.113 GB | very large, extremely low quality loss |
| [ko-wand-136M-Q8_0.gguf](https://huggingface.co/tensorblock/ko-wand-136M-GGUF/blob/main/ko-wand-136M-Q8_0.gguf) | Q8_0 | 0.146 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ko-wand-136M-GGUF --include "ko-wand-136M-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ko-wand-136M-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MBX-7B-v3-GGUF | tensorblock | 2025-04-21T00:32:47Z | 20 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MBX-7B",
"flemmingmiguel/MBX-7B-v3",
"TensorBlock",
"GGUF",
"base_model:flemmingmiguel/MBX-7B-v3",
"base_model:quantized:flemmingmiguel/MBX-7B-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T19:24:37Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B
- flemmingmiguel/MBX-7B-v3
- TensorBlock
- GGUF
base_model: flemmingmiguel/MBX-7B-v3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## flemmingmiguel/MBX-7B-v3 - GGUF
This repo contains GGUF format model files for [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MBX-7B-v3-Q2_K.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [MBX-7B-v3-Q3_K_S.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [MBX-7B-v3-Q3_K_M.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [MBX-7B-v3-Q3_K_L.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [MBX-7B-v3-Q4_0.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MBX-7B-v3-Q4_K_S.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [MBX-7B-v3-Q4_K_M.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [MBX-7B-v3-Q5_0.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MBX-7B-v3-Q5_K_S.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [MBX-7B-v3-Q5_K_M.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [MBX-7B-v3-Q6_K.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [MBX-7B-v3-Q8_0.gguf](https://huggingface.co/tensorblock/MBX-7B-v3-GGUF/blob/main/MBX-7B-v3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MBX-7B-v3-GGUF --include "MBX-7B-v3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MBX-7B-v3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mistral-Passthrough-8L-10B-GGUF | tensorblock | 2025-04-21T00:32:44Z | 71 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"TensorBlock",
"GGUF",
"base_model:DeepKarkhanis/Mistral-Passthrough-8L-10B",
"base_model:quantized:DeepKarkhanis/Mistral-Passthrough-8L-10B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T18:50:53Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- TensorBlock
- GGUF
base_model: DeepKarkhanis/Mistral-Passthrough-8L-10B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## DeepKarkhanis/Mistral-Passthrough-8L-10B - GGUF
This repo contains GGUF format model files for [DeepKarkhanis/Mistral-Passthrough-8L-10B](https://huggingface.co/DeepKarkhanis/Mistral-Passthrough-8L-10B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-Passthrough-8L-10B-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-Passthrough-8L-10B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-Passthrough-8L-10B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-Passthrough-8L-10B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-Passthrough-8L-10B-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-Passthrough-8L-10B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-Passthrough-8L-10B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-Passthrough-8L-10B-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-Passthrough-8L-10B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-Passthrough-8L-10B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-Passthrough-8L-10B-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-Passthrough-8L-10B-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-Passthrough-8L-10B-GGUF/blob/main/Mistral-Passthrough-8L-10B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-Passthrough-8L-10B-GGUF --include "Mistral-Passthrough-8L-10B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-Passthrough-8L-10B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/NeuralPizza-7B-V0.2-GGUF | tensorblock | 2025-04-21T00:32:42Z | 35 | 0 | Transformers | [
"Transformers",
"gguf",
"transformers",
"fine-tuned",
"language-modeling",
"direct-preference-optimization",
"TensorBlock",
"GGUF",
"dataset:Intel/orca_dpo_pairs",
"base_model:RatanRohith/NeuralPizza-7B-V0.2",
"base_model:quantized:RatanRohith/NeuralPizza-7B-V0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T18:23:55Z | ---
library_name: Transformers
tags:
- transformers
- fine-tuned
- language-modeling
- direct-preference-optimization
- TensorBlock
- GGUF
datasets:
- Intel/orca_dpo_pairs
license: apache-2.0
base_model: RatanRohith/NeuralPizza-7B-V0.2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## RatanRohith/NeuralPizza-7B-V0.2 - GGUF
This repo contains GGUF format model files for [RatanRohith/NeuralPizza-7B-V0.2](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralPizza-7B-V0.2-Q2_K.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralPizza-7B-V0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [NeuralPizza-7B-V0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [NeuralPizza-7B-V0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [NeuralPizza-7B-V0.2-Q4_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralPizza-7B-V0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [NeuralPizza-7B-V0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [NeuralPizza-7B-V0.2-Q5_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralPizza-7B-V0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [NeuralPizza-7B-V0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [NeuralPizza-7B-V0.2-Q6_K.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [NeuralPizza-7B-V0.2-Q8_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/NeuralPizza-7B-V0.2-GGUF --include "NeuralPizza-7B-V0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/NeuralPizza-7B-V0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ScaleDown-7B-slerp-v0.1-GGUF | tensorblock | 2025-04-21T00:32:41Z | 36 | 0 | null | [
"gguf",
"merge",
"mergekit",
"TensorBlock",
"GGUF",
"base_model:scaledown/ScaleDown-7B-slerp-v0.1",
"base_model:quantized:scaledown/ScaleDown-7B-slerp-v0.1",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T17:56:00Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- TensorBlock
- GGUF
base_model: scaledown/ScaleDown-7B-slerp-v0.1
model-index:
- name: ScaleDown-7B-slerp-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.9
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## scaledown/ScaleDown-7B-slerp-v0.1 - GGUF
This repo contains GGUF format model files for [scaledown/ScaleDown-7B-slerp-v0.1](https://huggingface.co/scaledown/ScaleDown-7B-slerp-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ScaleDown-7B-slerp-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [ScaleDown-7B-slerp-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [ScaleDown-7B-slerp-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [ScaleDown-7B-slerp-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [ScaleDown-7B-slerp-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ScaleDown-7B-slerp-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [ScaleDown-7B-slerp-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [ScaleDown-7B-slerp-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ScaleDown-7B-slerp-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [ScaleDown-7B-slerp-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [ScaleDown-7B-slerp-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [ScaleDown-7B-slerp-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/ScaleDown-7B-slerp-v0.1-GGUF/blob/main/ScaleDown-7B-slerp-v0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ScaleDown-7B-slerp-v0.1-GGUF --include "ScaleDown-7B-slerp-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ScaleDown-7B-slerp-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/CarbonVillain-en-10.7B-v5-GGUF | tensorblock | 2025-04-21T00:32:40Z | 48 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:jeonsworld/CarbonVillain-en-10.7B-v5",
"base_model:quantized:jeonsworld/CarbonVillain-en-10.7B-v5",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-20T17:26:26Z | ---
license: cc-by-nc-sa-4.0
language:
- en
base_model: jeonsworld/CarbonVillain-en-10.7B-v5
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jeonsworld/CarbonVillain-en-10.7B-v5 - GGUF
This repo contains GGUF format model files for [jeonsworld/CarbonVillain-en-10.7B-v5](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v5).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
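To illustrate how this template is used, here is a small sketch that fills the placeholders and runs a downloaded GGUF file with the `llama-cpp-python` bindings; that library is an assumption of the example (install it with `pip install llama-cpp-python`), and the file name and prompts are placeholders.
```python
from llama_cpp import Llama

# Load a GGUF quant downloaded from this repo (any quant from the table below works).
llm = Llama(model_path="CarbonVillain-en-10.7B-v5-Q4_K_M.gguf", n_ctx=4096)

# Fill the prompt template shown above.
system_prompt = "You are a helpful assistant."
prompt = "Summarize what a GGUF file is in one sentence."
full_prompt = f"### System:\n{system_prompt}\n\n### User:\n{prompt}\n\n### Assistant:\n"

output = llm(full_prompt, max_tokens=128, stop=["### User:"])
print(output["choices"][0]["text"].strip())
```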
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CarbonVillain-en-10.7B-v5-Q2_K.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [CarbonVillain-en-10.7B-v5-Q3_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [CarbonVillain-en-10.7B-v5-Q3_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [CarbonVillain-en-10.7B-v5-Q3_K_L.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [CarbonVillain-en-10.7B-v5-Q4_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CarbonVillain-en-10.7B-v5-Q4_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [CarbonVillain-en-10.7B-v5-Q4_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [CarbonVillain-en-10.7B-v5-Q5_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CarbonVillain-en-10.7B-v5-Q5_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [CarbonVillain-en-10.7B-v5-Q5_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [CarbonVillain-en-10.7B-v5-Q6_K.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [CarbonVillain-en-10.7B-v5-Q8_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v5-GGUF/blob/main/CarbonVillain-en-10.7B-v5-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CarbonVillain-en-10.7B-v5-GGUF --include "CarbonVillain-en-10.7B-v5-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CarbonVillain-en-10.7B-v5-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/LongAlign-7B-64k-base-GGUF | tensorblock | 2025-04-21T00:32:39Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"Long Context",
"llama",
"TensorBlock",
"GGUF",
"en",
"zh",
"base_model:THUDM/LongAlign-7B-64k-base",
"base_model:quantized:THUDM/LongAlign-7B-64k-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T17:17:39Z | ---
language:
- en
- zh
library_name: transformers
tags:
- Long Context
- llama
- TensorBlock
- GGUF
license: apache-2.0
base_model: THUDM/LongAlign-7B-64k-base
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## THUDM/LongAlign-7B-64k-base - GGUF
This repo contains GGUF format model files for [THUDM/LongAlign-7B-64k-base](https://huggingface.co/THUDM/LongAlign-7B-64k-base).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [LongAlign-7B-64k-base-Q2_K.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q2_K.gguf) | Q2_K | 2.534 GB | smallest, significant quality loss - not recommended for most purposes |
| [LongAlign-7B-64k-base-Q3_K_S.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q3_K_S.gguf) | Q3_K_S | 2.950 GB | very small, high quality loss |
| [LongAlign-7B-64k-base-Q3_K_M.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q3_K_M.gguf) | Q3_K_M | 3.299 GB | very small, high quality loss |
| [LongAlign-7B-64k-base-Q3_K_L.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q3_K_L.gguf) | Q3_K_L | 3.598 GB | small, substantial quality loss |
| [LongAlign-7B-64k-base-Q4_0.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [LongAlign-7B-64k-base-Q4_K_S.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q4_K_S.gguf) | Q4_K_S | 3.858 GB | small, greater quality loss |
| [LongAlign-7B-64k-base-Q4_K_M.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q4_K_M.gguf) | Q4_K_M | 4.082 GB | medium, balanced quality - recommended |
| [LongAlign-7B-64k-base-Q5_0.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q5_0.gguf) | Q5_0 | 4.653 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [LongAlign-7B-64k-base-Q5_K_S.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q5_K_S.gguf) | Q5_K_S | 4.653 GB | large, low quality loss - recommended |
| [LongAlign-7B-64k-base-Q5_K_M.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q5_K_M.gguf) | Q5_K_M | 4.785 GB | large, very low quality loss - recommended |
| [LongAlign-7B-64k-base-Q6_K.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q6_K.gguf) | Q6_K | 5.531 GB | very large, extremely low quality loss |
| [LongAlign-7B-64k-base-Q8_0.gguf](https://huggingface.co/tensorblock/LongAlign-7B-64k-base-GGUF/blob/main/LongAlign-7B-64k-base-Q8_0.gguf) | Q8_0 | 7.163 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/LongAlign-7B-64k-base-GGUF --include "LongAlign-7B-64k-base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/LongAlign-7B-64k-base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF | tensorblock | 2025-04-21T00:32:30Z | 26 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34",
"base_model:quantized:kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-20T14:47:53Z | ---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34 - GGUF
This repo contains GGUF format model files for [kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34](https://huggingface.co/kimwooglae/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q2_K.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q2_K.gguf) | Q2_K | 4.079 GB | smallest, significant quality loss - not recommended for most purposes |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q3_K_S.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q3_K_S.gguf) | Q3_K_S | 4.747 GB | very small, high quality loss |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q3_K_M.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q3_K_M.gguf) | Q3_K_M | 5.278 GB | very small, high quality loss |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q3_K_L.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q3_K_L.gguf) | Q3_K_L | 5.733 GB | small, substantial quality loss |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q4_0.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q4_0.gguf) | Q4_0 | 6.163 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q4_K_S.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q4_K_S.gguf) | Q4_K_S | 6.210 GB | small, greater quality loss |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q4_K_M.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q4_K_M.gguf) | Q4_K_M | 6.553 GB | medium, balanced quality - recommended |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q5_0.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q5_0.gguf) | Q5_0 | 7.497 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q5_K_S.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q5_K_S.gguf) | Q5_K_S | 7.497 GB | large, low quality loss - recommended |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q5_K_M.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q5_K_M.gguf) | Q5_K_M | 7.697 GB | large, very low quality loss - recommended |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q6_K.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q6_K.gguf) | Q6_K | 8.913 GB | very large, extremely low quality loss |
| [WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q8_0.gguf](https://huggingface.co/tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF/blob/main/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q8_0.gguf) | Q8_0 | 11.544 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF --include "WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/WebSquareAI-Instruct-KoSOLAR-10.7b-v0.5.34-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/FusionNet_linear-GGUF | tensorblock | 2025-04-21T00:32:28Z | 26 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:TomGrc/FusionNet_linear",
"base_model:quantized:TomGrc/FusionNet_linear",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-20T14:42:40Z | ---
language:
- en
license: mit
tags:
- merge
- TensorBlock
- GGUF
pipeline_tag: text-generation
base_model: TomGrc/FusionNet_linear
model-index:
- name: FusionNet_linear
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_linear
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_linear
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_linear
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.94
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_linear
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_linear
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_linear
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## TomGrc/FusionNet_linear - GGUF
This repo contains GGUF format model files for [TomGrc/FusionNet_linear](https://huggingface.co/TomGrc/FusionNet_linear).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [FusionNet_linear-Q2_K.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [FusionNet_linear-Q3_K_S.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [FusionNet_linear-Q3_K_M.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [FusionNet_linear-Q3_K_L.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [FusionNet_linear-Q4_0.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [FusionNet_linear-Q4_K_S.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [FusionNet_linear-Q4_K_M.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [FusionNet_linear-Q5_0.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [FusionNet_linear-Q5_K_S.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [FusionNet_linear-Q5_K_M.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [FusionNet_linear-Q6_K.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [FusionNet_linear-Q8_0.gguf](https://huggingface.co/tensorblock/FusionNet_linear-GGUF/blob/main/FusionNet_linear-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/FusionNet_linear-GGUF --include "FusionNet_linear-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/FusionNet_linear-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF | tensorblock | 2025-04-21T00:32:26Z | 39 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1",
"base_model:quantized:pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T13:10:33Z | ---
license: cc-by-nc-4.0
base_model: pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1 - GGUF
This repo contains GGUF format model files for [pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1](https://huggingface.co/pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF/blob/main/SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF --include "SOLAR-10.7B-dpo-instruct-tuned-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SOLAR-10.7B-dpo-instruct-tuned-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF | tensorblock | 2025-04-21T00:32:19Z | 36 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:ibndias/Nous-Hermes-2-MoE-2x34B",
"base_model:quantized:ibndias/Nous-Hermes-2-MoE-2x34B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T08:55:35Z | ---
license: apache-2.0
base_model: ibndias/Nous-Hermes-2-MoE-2x34B
tags:
- TensorBlock
- GGUF
model-index:
- name: Nous-Hermes-2-MoE-2x34B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.73
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## ibndias/Nous-Hermes-2-MoE-2x34B - GGUF
This repo contains GGUF format model files for [ibndias/Nous-Hermes-2-MoE-2x34B](https://huggingface.co/ibndias/Nous-Hermes-2-MoE-2x34B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Nous-Hermes-2-MoE-2x34B-Q2_K.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q2_K.gguf) | Q2_K | 22.394 GB | smallest, significant quality loss - not recommended for most purposes |
| [Nous-Hermes-2-MoE-2x34B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q3_K_S.gguf) | Q3_K_S | 26.318 GB | very small, high quality loss |
| [Nous-Hermes-2-MoE-2x34B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q3_K_M.gguf) | Q3_K_M | 29.237 GB | very small, high quality loss |
| [Nous-Hermes-2-MoE-2x34B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q3_K_L.gguf) | Q3_K_L | 31.768 GB | small, substantial quality loss |
| [Nous-Hermes-2-MoE-2x34B-Q4_0.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q4_0.gguf) | Q4_0 | 34.334 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Nous-Hermes-2-MoE-2x34B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q4_K_S.gguf) | Q4_K_S | 34.594 GB | small, greater quality loss |
| [Nous-Hermes-2-MoE-2x34B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q4_K_M.gguf) | Q4_K_M | 36.661 GB | medium, balanced quality - recommended |
| [Nous-Hermes-2-MoE-2x34B-Q5_0.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q5_0.gguf) | Q5_0 | 41.878 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Nous-Hermes-2-MoE-2x34B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q5_K_S.gguf) | Q5_K_S | 41.878 GB | large, low quality loss - recommended |
| [Nous-Hermes-2-MoE-2x34B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q5_K_M.gguf) | Q5_K_M | 43.077 GB | large, very low quality loss - recommended |
| [Nous-Hermes-2-MoE-2x34B-Q6_K.gguf](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q6_K.gguf) | Q6_K | 49.893 GB | very large, extremely low quality loss |
| [Nous-Hermes-2-MoE-2x34B-Q8_0](https://huggingface.co/tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF/blob/main/Nous-Hermes-2-MoE-2x34B-Q8_0) | Q8_0 | 23.974 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF --include "Nous-Hermes-2-MoE-2x34B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Nous-Hermes-2-MoE-2x34B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Med_GPT2-GGUF | tensorblock | 2025-04-21T00:32:17Z | 126 | 0 | null | [
"gguf",
"medical",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:gamino/wiki_medical_terms",
"base_model:Sharathhebbar24/Med_GPT2",
"base_model:quantized:Sharathhebbar24/Med_GPT2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-20T08:17:52Z | ---
license: apache-2.0
datasets:
- gamino/wiki_medical_terms
language:
- en
pipeline_tag: text-generation
tags:
- medical
- TensorBlock
- GGUF
base_model: Sharathhebbar24/Med_GPT2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Sharathhebbar24/Med_GPT2 - GGUF
This repo contains GGUF format model files for [Sharathhebbar24/Med_GPT2](https://huggingface.co/Sharathhebbar24/Med_GPT2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Med_GPT2-Q2_K.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q2_K.gguf) | Q2_K | 0.081 GB | smallest, significant quality loss - not recommended for most purposes |
| [Med_GPT2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q3_K_S.gguf) | Q3_K_S | 0.090 GB | very small, high quality loss |
| [Med_GPT2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q3_K_M.gguf) | Q3_K_M | 0.098 GB | very small, high quality loss |
| [Med_GPT2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q3_K_L.gguf) | Q3_K_L | 0.102 GB | small, substantial quality loss |
| [Med_GPT2-Q4_0.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q4_0.gguf) | Q4_0 | 0.107 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Med_GPT2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q4_K_S.gguf) | Q4_K_S | 0.107 GB | small, greater quality loss |
| [Med_GPT2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q4_K_M.gguf) | Q4_K_M | 0.113 GB | medium, balanced quality - recommended |
| [Med_GPT2-Q5_0.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q5_0.gguf) | Q5_0 | 0.122 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Med_GPT2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q5_K_S.gguf) | Q5_K_S | 0.122 GB | large, low quality loss - recommended |
| [Med_GPT2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q5_K_M.gguf) | Q5_K_M | 0.127 GB | large, very low quality loss - recommended |
| [Med_GPT2-Q6_K.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q6_K.gguf) | Q6_K | 0.138 GB | very large, extremely low quality loss |
| [Med_GPT2-Q8_0.gguf](https://huggingface.co/tensorblock/Med_GPT2-GGUF/blob/main/Med_GPT2-Q8_0.gguf) | Q8_0 | 0.178 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Med_GPT2-GGUF --include "Med_GPT2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Med_GPT2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ToRoLaMa-7b-v1.0-GGUF | tensorblock | 2025-04-21T00:32:15Z | 52 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"vi",
"en",
"base_model:allbyai/ToRoLaMa-7b-v1.0",
"base_model:quantized:allbyai/ToRoLaMa-7b-v1.0",
"license:llama2",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-20T07:31:36Z | ---
language:
- vi
- en
license: llama2
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: allbyai/ToRoLaMa-7b-v1.0
model-index:
- name: ToRoLaMa-7b-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=allbyai/ToRoLaMa-7b-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 73.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=allbyai/ToRoLaMa-7b-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=allbyai/ToRoLaMa-7b-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.89
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=allbyai/ToRoLaMa-7b-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=allbyai/ToRoLaMa-7b-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.36
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=allbyai/ToRoLaMa-7b-v1.0
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## allbyai/ToRoLaMa-7b-v1.0 - GGUF
This repo contains GGUF format model files for [allbyai/ToRoLaMa-7b-v1.0](https://huggingface.co/allbyai/ToRoLaMa-7b-v1.0).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ToRoLaMa-7b-v1.0-Q2_K.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q2_K.gguf) | Q2_K | 2.600 GB | smallest, significant quality loss - not recommended for most purposes |
| [ToRoLaMa-7b-v1.0-Q3_K_S.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [ToRoLaMa-7b-v1.0-Q3_K_M.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [ToRoLaMa-7b-v1.0-Q3_K_L.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [ToRoLaMa-7b-v1.0-Q4_0.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ToRoLaMa-7b-v1.0-Q4_K_S.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [ToRoLaMa-7b-v1.0-Q4_K_M.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q4_K_M.gguf) | Q4_K_M | 4.162 GB | medium, balanced quality - recommended |
| [ToRoLaMa-7b-v1.0-Q5_0.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q5_0.gguf) | Q5_0 | 4.740 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ToRoLaMa-7b-v1.0-Q5_K_S.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q5_K_S.gguf) | Q5_K_S | 4.740 GB | large, low quality loss - recommended |
| [ToRoLaMa-7b-v1.0-Q5_K_M.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [ToRoLaMa-7b-v1.0-Q6_K.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [ToRoLaMa-7b-v1.0-Q8_0.gguf](https://huggingface.co/tensorblock/ToRoLaMa-7b-v1.0-GGUF/blob/main/ToRoLaMa-7b-v1.0-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ToRoLaMa-7b-v1.0-GGUF --include "ToRoLaMa-7b-v1.0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ToRoLaMa-7b-v1.0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
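For scripted workflows, the pattern-based download above has a Python equivalent in `huggingface_hub.snapshot_download`, whose `allow_patterns` argument plays the role of `--include`; the pattern and directory below are placeholders.
```python
from huggingface_hub import snapshot_download

# Fetch every Q4_K variant from this repo in one call; all other files are skipped.
local_path = snapshot_download(
    repo_id="tensorblock/ToRoLaMa-7b-v1.0-GGUF",
    allow_patterns=["*Q4_K*.gguf"],
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # directory containing the matched GGUF files
```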
|
tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF | tensorblock | 2025-04-21T00:32:13Z | 38 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:LordNoah/Alpaca_spin_gpt2_e0_se1",
"base_model:quantized:LordNoah/Alpaca_spin_gpt2_e0_se1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-20T07:20:27Z | ---
license: apache-2.0
base_model: LordNoah/Alpaca_spin_gpt2_e0_se1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## LordNoah/Alpaca_spin_gpt2_e0_se1 - GGUF
This repo contains GGUF format model files for [LordNoah/Alpaca_spin_gpt2_e0_se1](https://huggingface.co/LordNoah/Alpaca_spin_gpt2_e0_se1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Alpaca_spin_gpt2_e0_se1-Q2_K.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q2_K.gguf) | Q2_K | 0.346 GB | smallest, significant quality loss - not recommended for most purposes |
| [Alpaca_spin_gpt2_e0_se1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q3_K_S.gguf) | Q3_K_S | 0.394 GB | very small, high quality loss |
| [Alpaca_spin_gpt2_e0_se1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q3_K_M.gguf) | Q3_K_M | 0.458 GB | very small, high quality loss |
| [Alpaca_spin_gpt2_e0_se1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q3_K_L.gguf) | Q3_K_L | 0.494 GB | small, substantial quality loss |
| [Alpaca_spin_gpt2_e0_se1-Q4_0.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q4_0.gguf) | Q4_0 | 0.497 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Alpaca_spin_gpt2_e0_se1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q4_K_S.gguf) | Q4_K_S | 0.500 GB | small, greater quality loss |
| [Alpaca_spin_gpt2_e0_se1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q4_K_M.gguf) | Q4_K_M | 0.549 GB | medium, balanced quality - recommended |
| [Alpaca_spin_gpt2_e0_se1-Q5_0.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q5_0.gguf) | Q5_0 | 0.593 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Alpaca_spin_gpt2_e0_se1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q5_K_S.gguf) | Q5_K_S | 0.593 GB | large, low quality loss - recommended |
| [Alpaca_spin_gpt2_e0_se1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q5_K_M.gguf) | Q5_K_M | 0.632 GB | large, very low quality loss - recommended |
| [Alpaca_spin_gpt2_e0_se1-Q6_K.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q6_K.gguf) | Q6_K | 0.696 GB | very large, extremely low quality loss |
| [Alpaca_spin_gpt2_e0_se1-Q8_0.gguf](https://huggingface.co/tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF/blob/main/Alpaca_spin_gpt2_e0_se1-Q8_0.gguf) | Q8_0 | 0.898 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF --include "Alpaca_spin_gpt2_e0_se1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Alpaca_spin_gpt2_e0_se1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/jaskier-7b-dpo-GGUF | tensorblock | 2025-04-21T00:32:03Z | 29 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"base_model:bardsai/jaskier-7b-dpo",
"base_model:quantized:bardsai/jaskier-7b-dpo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-20T02:48:12Z | ---
license: apache-2.0
language:
- en
datasets:
- Intel/orca_dpo_pairs
pipeline_tag: conversational
tags:
- TensorBlock
- GGUF
base_model: bardsai/jaskier-7b-dpo
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## bardsai/jaskier-7b-dpo - GGUF
This repo contains GGUF format model files for [bardsai/jaskier-7b-dpo](https://huggingface.co/bardsai/jaskier-7b-dpo).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [jaskier-7b-dpo-Q2_K.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [jaskier-7b-dpo-Q3_K_S.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [jaskier-7b-dpo-Q3_K_M.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [jaskier-7b-dpo-Q3_K_L.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [jaskier-7b-dpo-Q4_0.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [jaskier-7b-dpo-Q4_K_S.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [jaskier-7b-dpo-Q4_K_M.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [jaskier-7b-dpo-Q5_0.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [jaskier-7b-dpo-Q5_K_S.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [jaskier-7b-dpo-Q5_K_M.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [jaskier-7b-dpo-Q6_K.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [jaskier-7b-dpo-Q8_0.gguf](https://huggingface.co/tensorblock/jaskier-7b-dpo-GGUF/blob/main/jaskier-7b-dpo-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/jaskier-7b-dpo-GGUF --include "jaskier-7b-dpo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/jaskier-7b-dpo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MiniCPM-2B-dpo-fp16-GGUF | tensorblock | 2025-04-21T00:32:01Z | 38 | 0 | null | [
"gguf",
"MiniCPM",
"ModelBest",
"THUNLP",
"TensorBlock",
"GGUF",
"en",
"zh",
"base_model:openbmb/MiniCPM-2B-dpo-fp16",
"base_model:quantized:openbmb/MiniCPM-2B-dpo-fp16",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-20T02:35:39Z | ---
language:
- en
- zh
tags:
- MiniCPM
- ModelBest
- THUNLP
- TensorBlock
- GGUF
base_model: openbmb/MiniCPM-2B-dpo-fp16
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## openbmb/MiniCPM-2B-dpo-fp16 - GGUF
This repo contains GGUF format model files for [openbmb/MiniCPM-2B-dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
{system_prompt}<η¨ζ·>{prompt}<AI>
```
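For reference, a plain Python sketch of how this template could be filled in before being passed to an inference runtime; the system prompt and user message below are placeholders, not part of the model card.
```python
# Hypothetical example values; substitute your own system prompt and user message.
system_prompt = "You are a helpful assistant."
prompt = "Explain what a GGUF file is."

# MiniCPM expects the markers concatenated directly, with no extra newlines.
full_prompt = f"{system_prompt}<η¨ζ·>{prompt}<AI>"
print(full_prompt)
```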
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MiniCPM-2B-dpo-fp16-Q2_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q2_K.gguf) | Q2_K | 1.204 GB | smallest, significant quality loss - not recommended for most purposes |
| [MiniCPM-2B-dpo-fp16-Q3_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q3_K_S.gguf) | Q3_K_S | 1.355 GB | very small, high quality loss |
| [MiniCPM-2B-dpo-fp16-Q3_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q3_K_M.gguf) | Q3_K_M | 1.481 GB | very small, high quality loss |
| [MiniCPM-2B-dpo-fp16-Q3_K_L.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q3_K_L.gguf) | Q3_K_L | 1.564 GB | small, substantial quality loss |
| [MiniCPM-2B-dpo-fp16-Q4_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q4_0.gguf) | Q4_0 | 1.609 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MiniCPM-2B-dpo-fp16-Q4_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q4_K_S.gguf) | Q4_K_S | 1.682 GB | small, greater quality loss |
| [MiniCPM-2B-dpo-fp16-Q4_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q4_K_M.gguf) | Q4_K_M | 1.802 GB | medium, balanced quality - recommended |
| [MiniCPM-2B-dpo-fp16-Q5_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q5_0.gguf) | Q5_0 | 1.914 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MiniCPM-2B-dpo-fp16-Q5_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q5_K_S.gguf) | Q5_K_S | 1.948 GB | large, low quality loss - recommended |
| [MiniCPM-2B-dpo-fp16-Q5_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q5_K_M.gguf) | Q5_K_M | 2.045 GB | large, very low quality loss - recommended |
| [MiniCPM-2B-dpo-fp16-Q6_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q6_K.gguf) | Q6_K | 2.367 GB | very large, extremely low quality loss |
| [MiniCPM-2B-dpo-fp16-Q8_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-dpo-fp16-GGUF/blob/main/MiniCPM-2B-dpo-fp16-Q8_0.gguf) | Q8_0 | 2.899 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MiniCPM-2B-dpo-fp16-GGUF --include "MiniCPM-2B-dpo-fp16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MiniCPM-2B-dpo-fp16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF | tensorblock | 2025-04-21T00:31:58Z | 54 | 0 | null | [
"gguf",
"llm",
"fine-tune",
"yi",
"TensorBlock",
"GGUF",
"dataset:adamo1139/AEZAKMI_v2",
"base_model:adamo1139/Yi-34B-200K-AEZAKMI-v2",
"base_model:quantized:adamo1139/Yi-34B-200K-AEZAKMI-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-20T01:23:00Z | ---
license: apache-2.0
tags:
- llm
- fine-tune
- yi
- TensorBlock
- GGUF
datasets:
- adamo1139/AEZAKMI_v2
license_name: yi-license
license_link: LICENSE
base_model: adamo1139/Yi-34B-200K-AEZAKMI-v2
model-index:
- name: Yi-34B-200K-AEZAKMI-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.74
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 45.55
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.28
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 4.83
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.96
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.48
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.03
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## adamo1139/Yi-34B-200K-AEZAKMI-v2 - GGUF
This repo contains GGUF format model files for [adamo1139/Yi-34B-200K-AEZAKMI-v2](https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
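A minimal Python sketch of filling in this ChatML-style template before handing the string to an inference backend; the messages below are placeholders.
```python
# Hypothetical example values; replace with your own messages.
system_prompt = "You are a helpful assistant."
prompt = "Summarize the idea behind GGUF quantization."

# The prompt ends with an open assistant turn so the model continues from there.
full_prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
print(full_prompt)
```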
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-34B-200K-AEZAKMI-v2-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-34B-200K-AEZAKMI-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [Yi-34B-200K-AEZAKMI-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [Yi-34B-200K-AEZAKMI-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [Yi-34B-200K-AEZAKMI-v2-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-34B-200K-AEZAKMI-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [Yi-34B-200K-AEZAKMI-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [Yi-34B-200K-AEZAKMI-v2-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-34B-200K-AEZAKMI-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [Yi-34B-200K-AEZAKMI-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [Yi-34B-200K-AEZAKMI-v2-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [Yi-34B-200K-AEZAKMI-v2-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF/blob/main/Yi-34B-200K-AEZAKMI-v2-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF --include "Yi-34B-200K-AEZAKMI-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yi-34B-200K-AEZAKMI-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF | tensorblock | 2025-04-21T00:31:57Z | 75 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:UCLA-AGI/SPIN_iter3",
"base_model:UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3",
"base_model:quantized:UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-20T01:14:17Z | ---
license: mit
datasets:
- UCLA-AGI/SPIN_iter3
language:
- en
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3 - GGUF
This repo contains GGUF format model files for [UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3](https://huggingface.co/UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [zephyr-7b-sft-full-SPIN-iter3-Q2_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-7b-sft-full-SPIN-iter3-Q3_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [zephyr-7b-sft-full-SPIN-iter3-Q3_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [zephyr-7b-sft-full-SPIN-iter3-Q3_K_L.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [zephyr-7b-sft-full-SPIN-iter3-Q4_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-7b-sft-full-SPIN-iter3-Q4_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [zephyr-7b-sft-full-SPIN-iter3-Q4_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [zephyr-7b-sft-full-SPIN-iter3-Q5_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-7b-sft-full-SPIN-iter3-Q5_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [zephyr-7b-sft-full-SPIN-iter3-Q5_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [zephyr-7b-sft-full-SPIN-iter3-Q6_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [zephyr-7b-sft-full-SPIN-iter3-Q8_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF/blob/main/zephyr-7b-sft-full-SPIN-iter3-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF --include "zephyr-7b-sft-full-SPIN-iter3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/zephyr-7b-sft-full-SPIN-iter3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mixtral_7Bx5_MoE_30B-GGUF | tensorblock | 2025-04-21T00:31:54Z | 40 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:cloudyu/Mixtral_7Bx5_MoE_30B",
"base_model:quantized:cloudyu/Mixtral_7Bx5_MoE_30B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-19T23:21:56Z | ---
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: cloudyu/Mixtral_7Bx5_MoE_30B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cloudyu/Mixtral_7Bx5_MoE_30B - GGUF
This repo contains GGUF format model files for [cloudyu/Mixtral_7Bx5_MoE_30B](https://huggingface.co/cloudyu/Mixtral_7Bx5_MoE_30B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mixtral_7Bx5_MoE_30B-Q2_K.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q2_K.gguf) | Q2_K | 10.884 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mixtral_7Bx5_MoE_30B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q3_K_S.gguf) | Q3_K_S | 12.856 GB | very small, high quality loss |
| [Mixtral_7Bx5_MoE_30B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q3_K_M.gguf) | Q3_K_M | 14.267 GB | very small, high quality loss |
| [Mixtral_7Bx5_MoE_30B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q3_K_L.gguf) | Q3_K_L | 15.451 GB | small, substantial quality loss |
| [Mixtral_7Bx5_MoE_30B-Q4_0.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q4_0.gguf) | Q4_0 | 16.795 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mixtral_7Bx5_MoE_30B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q4_K_S.gguf) | Q4_K_S | 16.944 GB | small, greater quality loss |
| [Mixtral_7Bx5_MoE_30B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q4_K_M.gguf) | Q4_K_M | 18.024 GB | medium, balanced quality - recommended |
| [Mixtral_7Bx5_MoE_30B-Q5_0.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q5_0.gguf) | Q5_0 | 20.502 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mixtral_7Bx5_MoE_30B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q5_K_S.gguf) | Q5_K_S | 20.502 GB | large, low quality loss - recommended |
| [Mixtral_7Bx5_MoE_30B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q5_K_M.gguf) | Q5_K_M | 21.135 GB | large, very low quality loss - recommended |
| [Mixtral_7Bx5_MoE_30B-Q6_K.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q6_K.gguf) | Q6_K | 24.442 GB | very large, extremely low quality loss |
| [Mixtral_7Bx5_MoE_30B-Q8_0.gguf](https://huggingface.co/tensorblock/Mixtral_7Bx5_MoE_30B-GGUF/blob/main/Mixtral_7Bx5_MoE_30B-Q8_0.gguf) | Q8_0 | 31.656 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mixtral_7Bx5_MoE_30B-GGUF --include "Mixtral_7Bx5_MoE_30B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mixtral_7Bx5_MoE_30B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF | tensorblock | 2025-04-21T00:31:51Z | 76 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE",
"base_model:quantized:SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-19T22:30:36Z | ---
license: cc-by-nc-4.0
tags:
- merge
- TensorBlock
- GGUF
base_model: SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE - GGUF
This repo contains GGUF format model files for [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q2_K.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_S.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_M.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_L.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_0.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_S.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_M.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_0.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_S.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_M.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q6_K.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Loyal-Toppy-Bruins-Maid-7B-DARE-Q8_0.gguf](https://huggingface.co/tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF/blob/main/Loyal-Toppy-Bruins-Maid-7B-DARE-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF --include "Loyal-Toppy-Bruins-Maid-7B-DARE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mozaic-7B-GGUF | tensorblock | 2025-04-21T00:31:50Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:VitalContribution/Evangelion-7B",
"base_model:quantized:VitalContribution/Evangelion-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-19T21:52:43Z | ---
license: apache-2.0
library_name: transformers
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: MozaicAI/Mozaic-7B
model-index:
- name: Evangelion-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VitalContribution/Evangelion-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VitalContribution/Evangelion-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VitalContribution/Evangelion-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VitalContribution/Evangelion-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VitalContribution/Evangelion-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VitalContribution/Evangelion-7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## MozaicAI/Mozaic-7B - GGUF
This repo contains GGUF format model files for [MozaicAI/Mozaic-7B](https://huggingface.co/MozaicAI/Mozaic-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mozaic-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mozaic-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mozaic-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mozaic-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mozaic-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mozaic-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mozaic-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mozaic-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mozaic-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mozaic-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mozaic-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mozaic-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Mozaic-7B-GGUF/blob/main/Mozaic-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mozaic-7B-GGUF --include "Mozaic-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mozaic-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
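After downloading, a quantized file can be loaded with a llama.cpp binding such as `llama-cpp-python`. The sketch below assumes the Q4_K_M file was saved to `./models` and uses the ChatML format matching the prompt template above; treat the path and settings as placeholders to adjust for your hardware.
```python
from llama_cpp import Llama

# Load the quantized model (assumed local path) with the ChatML chat format.
llm = Llama(
    model_path="./models/Mozaic-7B-Q4_K_M.gguf",
    n_ctx=4096,
    chat_format="chatml",
)

# Run a short chat completion against the local GGUF model.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a GGUF file?"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```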
|
tensorblock/MetaModelv3-GGUF | tensorblock | 2025-04-21T00:31:47Z | 28 | 0 | null | [
"gguf",
"MetaModelv3",
"merge",
"TensorBlock",
"GGUF",
"base_model:gagan3012/MetaModelv3",
"base_model:quantized:gagan3012/MetaModelv3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-19T21:02:59Z | ---
license: apache-2.0
tags:
- MetaModelv3
- merge
- TensorBlock
- GGUF
base_model: gagan3012/MetaModelv3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## gagan3012/MetaModelv3 - GGUF
This repo contains GGUF format model files for [gagan3012/MetaModelv3](https://huggingface.co/gagan3012/MetaModelv3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MetaModelv3-Q2_K.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [MetaModelv3-Q3_K_S.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [MetaModelv3-Q3_K_M.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [MetaModelv3-Q3_K_L.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [MetaModelv3-Q4_0.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MetaModelv3-Q4_K_S.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [MetaModelv3-Q4_K_M.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [MetaModelv3-Q5_0.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MetaModelv3-Q5_K_S.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [MetaModelv3-Q5_K_M.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [MetaModelv3-Q6_K.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [MetaModelv3-Q8_0.gguf](https://huggingface.co/tensorblock/MetaModelv3-GGUF/blob/main/MetaModelv3-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MetaModelv3-GGUF --include "MetaModelv3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MetaModelv3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
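If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/MetaModelv3-GGUF",
    filename="MetaModelv3-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/MetaModelv3-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```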
|
tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF | tensorblock | 2025-04-21T00:31:43Z | 33 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:Weyaxi/openchat-3.5-1210-Seraph-Slerp",
"base_model:quantized:Weyaxi/openchat-3.5-1210-Seraph-Slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-19T19:53:51Z | ---
license: apache-2.0
tags:
- merge
- TensorBlock
- GGUF
base_model: Weyaxi/openchat-3.5-1210-Seraph-Slerp
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Weyaxi/openchat-3.5-1210-Seraph-Slerp - GGUF
This repo contains GGUF format model files for [Weyaxi/openchat-3.5-1210-Seraph-Slerp](https://huggingface.co/Weyaxi/openchat-3.5-1210-Seraph-Slerp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [openchat-3.5-1210-Seraph-Slerp-Q2_K.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [openchat-3.5-1210-Seraph-Slerp-Q3_K_S.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [openchat-3.5-1210-Seraph-Slerp-Q3_K_M.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [openchat-3.5-1210-Seraph-Slerp-Q3_K_L.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [openchat-3.5-1210-Seraph-Slerp-Q4_0.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openchat-3.5-1210-Seraph-Slerp-Q4_K_S.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [openchat-3.5-1210-Seraph-Slerp-Q4_K_M.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [openchat-3.5-1210-Seraph-Slerp-Q5_0.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openchat-3.5-1210-Seraph-Slerp-Q5_K_S.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [openchat-3.5-1210-Seraph-Slerp-Q5_K_M.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [openchat-3.5-1210-Seraph-Slerp-Q6_K.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [openchat-3.5-1210-Seraph-Slerp-Q8_0.gguf](https://huggingface.co/tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF/blob/main/openchat-3.5-1210-Seraph-Slerp-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF --include "openchat-3.5-1210-Seraph-Slerp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
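If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF",
    filename="openchat-3.5-1210-Seraph-Slerp-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/openchat-3.5-1210-Seraph-Slerp-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```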
|
tensorblock/mistral_v1-GGUF | tensorblock | 2025-04-21T00:31:42Z | 43 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:vikash06/mistral_v1",
"base_model:quantized:vikash06/mistral_v1",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-19T19:15:42Z | ---
license: mit
tags:
- TensorBlock
- GGUF
base_model: vikash06/mistral_v1
model-index:
- name: mistral_v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 47.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 9.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/mistral_v1
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## vikash06/mistral_v1 - GGUF
This repo contains GGUF format model files for [vikash06/mistral_v1](https://huggingface.co/vikash06/mistral_v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral_v1-Q2_K.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral_v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [mistral_v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [mistral_v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [mistral_v1-Q4_0.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral_v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [mistral_v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [mistral_v1-Q5_0.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral_v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [mistral_v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [mistral_v1-Q6_K.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [mistral_v1-Q8_0.gguf](https://huggingface.co/tensorblock/mistral_v1-GGUF/blob/main/mistral_v1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mistral_v1-GGUF --include "mistral_v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mistral_v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
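If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/mistral_v1-GGUF",
    filename="mistral_v1-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/mistral_v1-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```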
|
tensorblock/bagel-dpo-7b-v0.1-GGUF | tensorblock | 2025-04-21T00:31:41Z | 38 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:jondurbin/bagel-dpo-7b-v0.1",
"base_model:quantized:jondurbin/bagel-dpo-7b-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-19T18:49:17Z | ---
license: apache-2.0
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
tags:
- TensorBlock
- GGUF
base_model: jondurbin/bagel-dpo-7b-v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jondurbin/bagel-dpo-7b-v0.1 - GGUF
This repo contains GGUF format model files for [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [bagel-dpo-7b-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [bagel-dpo-7b-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [bagel-dpo-7b-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [bagel-dpo-7b-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [bagel-dpo-7b-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [bagel-dpo-7b-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [bagel-dpo-7b-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [bagel-dpo-7b-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [bagel-dpo-7b-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [bagel-dpo-7b-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [bagel-dpo-7b-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [bagel-dpo-7b-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/bagel-dpo-7b-v0.1-GGUF/blob/main/bagel-dpo-7b-v0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/bagel-dpo-7b-v0.1-GGUF --include "bagel-dpo-7b-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/bagel-dpo-7b-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
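If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/bagel-dpo-7b-v0.1-GGUF",
    filename="bagel-dpo-7b-v0.1-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/bagel-dpo-7b-v0.1-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```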
|
tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF | tensorblock | 2025-04-21T00:31:34Z | 13 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:macadeliccc/SOLAR-math-2x10.7b-v0.2",
"base_model:quantized:macadeliccc/SOLAR-math-2x10.7b-v0.2",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-19T12:48:17Z | ---
license: cc-by-nc-4.0
base_model: macadeliccc/SOLAR-math-2x10.7b-v0.2
tags:
- TensorBlock
- GGUF
model-index:
- name: SOLAR-math-2x10.7b-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.68
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## macadeliccc/SOLAR-math-2x10.7b-v0.2 - GGUF
This repo contains GGUF format model files for [macadeliccc/SOLAR-math-2x10.7b-v0.2](https://huggingface.co/macadeliccc/SOLAR-math-2x10.7b-v0.2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SOLAR-math-2x10.7b-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q2_K.gguf) | Q2_K | 7.066 GB | smallest, significant quality loss - not recommended for most purposes |
| [SOLAR-math-2x10.7b-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q3_K_S.gguf) | Q3_K_S | 8.299 GB | very small, high quality loss |
| [SOLAR-math-2x10.7b-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q3_K_M.gguf) | Q3_K_M | 9.227 GB | very small, high quality loss |
| [SOLAR-math-2x10.7b-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q3_K_L.gguf) | Q3_K_L | 10.012 GB | small, substantial quality loss |
| [SOLAR-math-2x10.7b-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q4_0.gguf) | Q4_0 | 10.830 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SOLAR-math-2x10.7b-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q4_K_S.gguf) | Q4_K_S | 10.920 GB | small, greater quality loss |
| [SOLAR-math-2x10.7b-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q4_K_M.gguf) | Q4_K_M | 11.583 GB | medium, balanced quality - recommended |
| [SOLAR-math-2x10.7b-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q5_0.gguf) | Q5_0 | 13.212 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SOLAR-math-2x10.7b-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q5_K_S.gguf) | Q5_K_S | 13.212 GB | large, low quality loss - recommended |
| [SOLAR-math-2x10.7b-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q5_K_M.gguf) | Q5_K_M | 13.600 GB | large, very low quality loss - recommended |
| [SOLAR-math-2x10.7b-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q6_K.gguf) | Q6_K | 15.743 GB | very large, extremely low quality loss |
| [SOLAR-math-2x10.7b-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF/blob/main/SOLAR-math-2x10.7b-v0.2-Q8_0.gguf) | Q8_0 | 20.390 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF --include "SOLAR-math-2x10.7b-v0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
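If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF",
    filename="SOLAR-math-2x10.7b-v0.2-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/SOLAR-math-2x10.7b-v0.2-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```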
|
tensorblock/Tenebra_30B_Alpha01_FP16-GGUF | tensorblock | 2025-04-21T00:31:33Z | 17 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:SicariusSicariiStuff/Tenebra_30B_Alpha01",
"base_model:quantized:SicariusSicariiStuff/Tenebra_30B_Alpha01",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-19T12:12:34Z | ---
language:
- en
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16 - GGUF
This repo contains GGUF format model files for [SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Tenebra_30B_Alpha01_FP16-Q2_K.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q2_K.gguf) | Q2_K | 12.049 GB | smallest, significant quality loss - not recommended for most purposes |
| [Tenebra_30B_Alpha01_FP16-Q3_K_S.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q3_K_S.gguf) | Q3_K_S | 14.064 GB | very small, high quality loss |
| [Tenebra_30B_Alpha01_FP16-Q3_K_M.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q3_K_M.gguf) | Q3_K_M | 15.776 GB | very small, high quality loss |
| [Tenebra_30B_Alpha01_FP16-Q3_K_L.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q3_K_L.gguf) | Q3_K_L | 17.280 GB | small, substantial quality loss |
| [Tenebra_30B_Alpha01_FP16-Q4_0.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q4_0.gguf) | Q4_0 | 18.356 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Tenebra_30B_Alpha01_FP16-Q4_K_S.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q4_K_S.gguf) | Q4_K_S | 18.482 GB | small, greater quality loss |
| [Tenebra_30B_Alpha01_FP16-Q4_K_M.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q4_K_M.gguf) | Q4_K_M | 19.621 GB | medium, balanced quality - recommended |
| [Tenebra_30B_Alpha01_FP16-Q5_0.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q5_0.gguf) | Q5_0 | 22.395 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Tenebra_30B_Alpha01_FP16-Q5_K_S.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q5_K_S.gguf) | Q5_K_S | 22.395 GB | large, low quality loss - recommended |
| [Tenebra_30B_Alpha01_FP16-Q5_K_M.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q5_K_M.gguf) | Q5_K_M | 23.047 GB | large, very low quality loss - recommended |
| [Tenebra_30B_Alpha01_FP16-Q6_K.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q6_K.gguf) | Q6_K | 26.687 GB | very large, extremely low quality loss |
| [Tenebra_30B_Alpha01_FP16-Q8_0.gguf](https://huggingface.co/tensorblock/Tenebra_30B_Alpha01_FP16-GGUF/blob/main/Tenebra_30B_Alpha01_FP16-Q8_0.gguf) | Q8_0 | 34.565 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Tenebra_30B_Alpha01_FP16-GGUF --include "Tenebra_30B_Alpha01_FP16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Tenebra_30B_Alpha01_FP16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
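If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/Tenebra_30B_Alpha01_FP16-GGUF",
    filename="Tenebra_30B_Alpha01_FP16-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/Tenebra_30B_Alpha01_FP16-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```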
|
tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF | tensorblock | 2025-04-21T00:31:32Z | 52 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:luffycodes/vicuna-class-shishya-all-hal-13b-ep3",
"base_model:quantized:luffycodes/vicuna-class-shishya-all-hal-13b-ep3",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-19T11:21:20Z | ---
license: llama2
base_model: luffycodes/vicuna-class-shishya-all-hal-13b-ep3
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## luffycodes/vicuna-class-shishya-all-hal-13b-ep3 - GGUF
This repo contains GGUF format model files for [luffycodes/vicuna-class-shishya-all-hal-13b-ep3](https://huggingface.co/luffycodes/vicuna-class-shishya-all-hal-13b-ep3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [vicuna-class-shishya-all-hal-13b-ep3-Q2_K.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [vicuna-class-shishya-all-hal-13b-ep3-Q3_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [vicuna-class-shishya-all-hal-13b-ep3-Q3_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [vicuna-class-shishya-all-hal-13b-ep3-Q3_K_L.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [vicuna-class-shishya-all-hal-13b-ep3-Q4_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vicuna-class-shishya-all-hal-13b-ep3-Q4_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [vicuna-class-shishya-all-hal-13b-ep3-Q4_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [vicuna-class-shishya-all-hal-13b-ep3-Q5_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vicuna-class-shishya-all-hal-13b-ep3-Q5_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [vicuna-class-shishya-all-hal-13b-ep3-Q5_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [vicuna-class-shishya-all-hal-13b-ep3-Q6_K.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [vicuna-class-shishya-all-hal-13b-ep3-Q8_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF/blob/main/vicuna-class-shishya-all-hal-13b-ep3-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF --include "vicuna-class-shishya-all-hal-13b-ep3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
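If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF",
    filename="vicuna-class-shishya-all-hal-13b-ep3-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/vicuna-class-shishya-all-hal-13b-ep3-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```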
|
tensorblock/MistralTrix-v1-GGUF | tensorblock | 2025-04-21T00:31:29Z | 63 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:CultriX/MistralTrix-v1",
"base_model:quantized:CultriX/MistralTrix-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-19T10:18:44Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
dtype: bfloat16
tags:
- merge
- TensorBlock
- GGUF
base_model: CultriX/MistralTrix-v1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## CultriX/MistralTrix-v1 - GGUF
This repo contains GGUF format model files for [CultriX/MistralTrix-v1](https://huggingface.co/CultriX/MistralTrix-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MistralTrix-v1-Q2_K.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q2_K.gguf) | Q2_K | 3.361 GB | smallest, significant quality loss - not recommended for most purposes |
| [MistralTrix-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q3_K_S.gguf) | Q3_K_S | 3.915 GB | very small, high quality loss |
| [MistralTrix-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q3_K_M.gguf) | Q3_K_M | 4.354 GB | very small, high quality loss |
| [MistralTrix-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q3_K_L.gguf) | Q3_K_L | 4.736 GB | small, substantial quality loss |
| [MistralTrix-v1-Q4_0.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q4_0.gguf) | Q4_0 | 5.091 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MistralTrix-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q4_K_S.gguf) | Q4_K_S | 5.129 GB | small, greater quality loss |
| [MistralTrix-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q4_K_M.gguf) | Q4_K_M | 5.415 GB | medium, balanced quality - recommended |
| [MistralTrix-v1-Q5_0.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q5_0.gguf) | Q5_0 | 6.198 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MistralTrix-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q5_K_S.gguf) | Q5_K_S | 6.198 GB | large, low quality loss - recommended |
| [MistralTrix-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q5_K_M.gguf) | Q5_K_M | 6.365 GB | large, very low quality loss - recommended |
| [MistralTrix-v1-Q6_K.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q6_K.gguf) | Q6_K | 7.374 GB | very large, extremely low quality loss |
| [MistralTrix-v1-Q8_0.gguf](https://huggingface.co/tensorblock/MistralTrix-v1-GGUF/blob/main/MistralTrix-v1-Q8_0.gguf) | Q8_0 | 9.550 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MistralTrix-v1-GGUF --include "MistralTrix-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MistralTrix-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
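If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/MistralTrix-v1-GGUF",
    filename="MistralTrix-v1-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/MistralTrix-v1-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```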
|
tensorblock/CarbonVillain-en-10.7B-v2-GGUF | tensorblock | 2025-04-21T00:31:27Z | 112 | 0 | null | [
"gguf",
"merge",
"slerp",
"TensorBlock",
"GGUF",
"en",
"base_model:jeonsworld/CarbonVillain-en-10.7B-v2",
"base_model:quantized:jeonsworld/CarbonVillain-en-10.7B-v2",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-19T06:44:58Z | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- merge
- slerp
- TensorBlock
- GGUF
base_model: jeonsworld/CarbonVillain-en-10.7B-v2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jeonsworld/CarbonVillain-en-10.7B-v2 - GGUF
This repo contains GGUF format model files for [jeonsworld/CarbonVillain-en-10.7B-v2](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CarbonVillain-en-10.7B-v2-Q2_K.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [CarbonVillain-en-10.7B-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [CarbonVillain-en-10.7B-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [CarbonVillain-en-10.7B-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [CarbonVillain-en-10.7B-v2-Q4_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CarbonVillain-en-10.7B-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [CarbonVillain-en-10.7B-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [CarbonVillain-en-10.7B-v2-Q5_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CarbonVillain-en-10.7B-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [CarbonVillain-en-10.7B-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [CarbonVillain-en-10.7B-v2-Q6_K.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [CarbonVillain-en-10.7B-v2-Q8_0.gguf](https://huggingface.co/tensorblock/CarbonVillain-en-10.7B-v2-GGUF/blob/main/CarbonVillain-en-10.7B-v2-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CarbonVillain-en-10.7B-v2-GGUF --include "CarbonVillain-en-10.7B-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CarbonVillain-en-10.7B-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
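If you prefer to stay in Python, the same files can also be fetched with the `huggingface_hub` library. The snippet below is a minimal sketch; `MY_LOCAL_DIR` is just a placeholder directory name:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quantized file
hf_hub_download(
    repo_id="tensorblock/CarbonVillain-en-10.7B-v2-GGUF",
    filename="CarbonVillain-en-10.7B-v2-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern (e.g. all Q4_K variants)
snapshot_download(
    repo_id="tensorblock/CarbonVillain-en-10.7B-v2-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```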
|
tensorblock/Yi-Ko-34B-GGUF | tensorblock | 2025-04-21T00:31:24Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"pytorch",
"Yi-Ko",
"01-ai",
"Yi",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"ko",
"base_model:beomi/Yi-Ko-34B",
"base_model:quantized:beomi/Yi-Ko-34B",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-12-19T01:18:27Z | ---
extra_gated_heading: Access beomi/Yi-Ko-34B on Hugging Face
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username: checkbox
? I confirm that I understand this project is for research purposes only, and confirm
that I agree to follow the LICENSE of this model
: checkbox
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- Yi-Ko
- 01-ai
- Yi
- TensorBlock
- GGUF
library_name: transformers
license: apache-2.0
base_model: beomi/Yi-Ko-34B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## beomi/Yi-Ko-34B - GGUF
This repo contains GGUF format model files for [beomi/Yi-Ko-34B](https://huggingface.co/beomi/Yi-Ko-34B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-Ko-34B-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q2_K.gguf) | Q2_K | 12.945 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-Ko-34B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q3_K_S.gguf) | Q3_K_S | 15.090 GB | very small, high quality loss |
| [Yi-Ko-34B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q3_K_M.gguf) | Q3_K_M | 16.785 GB | very small, high quality loss |
| [Yi-Ko-34B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q3_K_L.gguf) | Q3_K_L | 18.269 GB | small, substantial quality loss |
| [Yi-Ko-34B-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q4_0.gguf) | Q4_0 | 19.610 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-Ko-34B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q4_K_S.gguf) | Q4_K_S | 19.742 GB | small, greater quality loss |
| [Yi-Ko-34B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q4_K_M.gguf) | Q4_K_M | 20.802 GB | medium, balanced quality - recommended |
| [Yi-Ko-34B-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q5_0.gguf) | Q5_0 | 23.864 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-Ko-34B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q5_K_S.gguf) | Q5_K_S | 23.864 GB | large, low quality loss - recommended |
| [Yi-Ko-34B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q5_K_M.gguf) | Q5_K_M | 24.479 GB | large, very low quality loss - recommended |
| [Yi-Ko-34B-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q6_K.gguf) | Q6_K | 28.384 GB | very large, extremely low quality loss |
| [Yi-Ko-34B-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-34B-GGUF/blob/main/Yi-Ko-34B-Q8_0.gguf) | Q8_0 | 36.763 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Yi-Ko-34B-GGUF --include "Yi-Ko-34B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yi-Ko-34B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
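If you prefer scripting the download instead of the CLI, the following is a minimal Python sketch using the `huggingface_hub` library; the repository and file names mirror the commands above, and `MY_LOCAL_DIR` is just a placeholder directory.
```python
from huggingface_hub import hf_hub_download

# Download a single quantized file from this repository to a local directory.
# "MY_LOCAL_DIR" is a placeholder; replace it with the folder you want to use.
local_path = hf_hub_download(
    repo_id="tensorblock/Yi-Ko-34B-GGUF",
    filename="Yi-Ko-34B-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(f"Model file saved to: {local_path}")
```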
|
tensorblock/Lonepino-11B-GGUF | tensorblock | 2025-04-21T00:31:22Z | 24 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:beberik/Lonepino-11B",
"base_model:quantized:beberik/Lonepino-11B",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-19T01:14:04Z | ---
license: cc-by-nc-4.0
tags:
- merge
- TensorBlock
- GGUF
base_model: beberik/Lonepino-11B
model-index:
- name: Lonepino-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.76
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## beberik/Lonepino-11B - GGUF
This repo contains GGUF format model files for [beberik/Lonepino-11B](https://huggingface.co/beberik/Lonepino-11B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Lonepino-11B-Q2_K.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [Lonepino-11B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [Lonepino-11B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [Lonepino-11B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [Lonepino-11B-Q4_0.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Lonepino-11B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [Lonepino-11B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [Lonepino-11B-Q5_0.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Lonepino-11B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [Lonepino-11B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [Lonepino-11B-Q6_K.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [Lonepino-11B-Q8_0.gguf](https://huggingface.co/tensorblock/Lonepino-11B-GGUF/blob/main/Lonepino-11B-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Lonepino-11B-GGUF --include "Lonepino-11B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Lonepino-11B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
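Pattern-based downloads can also be done from Python with `snapshot_download`; this is a minimal sketch assuming the same `*Q4_K*` pattern as the CLI command above and a placeholder local directory.
```python
from huggingface_hub import snapshot_download

# Fetch only the files matching the pattern (here the Q4_K quantizations)
# from this repository into a placeholder local directory.
snapshot_download(
    repo_id="tensorblock/Lonepino-11B-GGUF",
    allow_patterns=["*Q4_K*"],
    local_dir="MY_LOCAL_DIR",
)
```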
|
tensorblock/Phoenix-v1-8x7B-GGUF | tensorblock | 2025-04-21T00:31:18Z | 31 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:jan-hq/Phoenix-v1-8x7B",
"base_model:quantized:jan-hq/Phoenix-v1-8x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-18T21:09:41Z | ---
license: apache-2.0
language:
- en
base_model: jan-hq/Phoenix-v1-8x7B
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jan-hq/Phoenix-v1-8x7B - GGUF
This repo contains GGUF format model files for [jan-hq/Phoenix-v1-8x7B](https://huggingface.co/jan-hq/Phoenix-v1-8x7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Phoenix-v1-8x7B-Q2_K.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [Phoenix-v1-8x7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [Phoenix-v1-8x7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [Phoenix-v1-8x7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [Phoenix-v1-8x7B-Q4_0.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Phoenix-v1-8x7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [Phoenix-v1-8x7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [Phoenix-v1-8x7B-Q5_0.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Phoenix-v1-8x7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [Phoenix-v1-8x7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [Phoenix-v1-8x7B-Q6_K.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [Phoenix-v1-8x7B-Q8_0.gguf](https://huggingface.co/tensorblock/Phoenix-v1-8x7B-GGUF/blob/main/Phoenix-v1-8x7B-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Phoenix-v1-8x7B-GGUF --include "Phoenix-v1-8x7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Phoenix-v1-8x7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MiniCPM-2B-sft-bf16-GGUF | tensorblock | 2025-04-21T00:31:17Z | 47 | 0 | null | [
"gguf",
"MiniCPM",
"ModelBest",
"THUNLP",
"TensorBlock",
"GGUF",
"en",
"zh",
"base_model:openbmb/MiniCPM-2B-sft-bf16",
"base_model:quantized:openbmb/MiniCPM-2B-sft-bf16",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-18T20:34:18Z | ---
language:
- en
- zh
tags:
- MiniCPM
- ModelBest
- THUNLP
- TensorBlock
- GGUF
base_model: openbmb/MiniCPM-2B-sft-bf16
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## openbmb/MiniCPM-2B-sft-bf16 - GGUF
This repo contains GGUF format model files for [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
{system_prompt}<η¨ζ·>{prompt}<AI>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MiniCPM-2B-sft-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q2_K.gguf) | Q2_K | 1.204 GB | smallest, significant quality loss - not recommended for most purposes |
| [MiniCPM-2B-sft-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q3_K_S.gguf) | Q3_K_S | 1.355 GB | very small, high quality loss |
| [MiniCPM-2B-sft-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q3_K_M.gguf) | Q3_K_M | 1.481 GB | very small, high quality loss |
| [MiniCPM-2B-sft-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q3_K_L.gguf) | Q3_K_L | 1.564 GB | small, substantial quality loss |
| [MiniCPM-2B-sft-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q4_0.gguf) | Q4_0 | 1.609 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MiniCPM-2B-sft-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q4_K_S.gguf) | Q4_K_S | 1.682 GB | small, greater quality loss |
| [MiniCPM-2B-sft-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q4_K_M.gguf) | Q4_K_M | 1.802 GB | medium, balanced quality - recommended |
| [MiniCPM-2B-sft-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q5_0.gguf) | Q5_0 | 1.914 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MiniCPM-2B-sft-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q5_K_S.gguf) | Q5_K_S | 1.948 GB | large, low quality loss - recommended |
| [MiniCPM-2B-sft-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q5_K_M.gguf) | Q5_K_M | 2.045 GB | large, very low quality loss - recommended |
| [MiniCPM-2B-sft-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q6_K.gguf) | Q6_K | 2.367 GB | very large, extremely low quality loss |
| [MiniCPM-2B-sft-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/MiniCPM-2B-sft-bf16-GGUF/blob/main/MiniCPM-2B-sft-bf16-Q8_0.gguf) | Q8_0 | 2.899 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MiniCPM-2B-sft-bf16-GGUF --include "MiniCPM-2B-sft-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MiniCPM-2B-sft-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Multilingual-mistral-GGUF | tensorblock | 2025-04-21T00:31:15Z | 30 | 0 | null | [
"gguf",
"moe",
"mixtral",
"openchat/openchat-3.5-0106",
"giux78/zefiro-7b-beta-ITA-v0.1",
"azale-ai/Starstreak-7b-beta",
"gagan3012/Mistral_arabic_dpo",
"davidkim205/komt-mistral-7b-v1",
"OpenBuddy/openbuddy-zephyr-7b-v14.1",
"manishiitg/open-aditi-hi-v1",
"VAGOsolutions/SauerkrautLM-7b-v1-mistral",
"TensorBlock",
"GGUF",
"base_model:gagan3012/Multilingual-mistral",
"base_model:quantized:gagan3012/Multilingual-mistral",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-18T20:23:22Z | ---
license: apache-2.0
tags:
- moe
- mixtral
- openchat/openchat-3.5-0106
- giux78/zefiro-7b-beta-ITA-v0.1
- azale-ai/Starstreak-7b-beta
- gagan3012/Mistral_arabic_dpo
- davidkim205/komt-mistral-7b-v1
- OpenBuddy/openbuddy-zephyr-7b-v14.1
- manishiitg/open-aditi-hi-v1
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- TensorBlock
- GGUF
base_model: gagan3012/Multilingual-mistral
model-index:
- name: Multilingual-mistral
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## gagan3012/Multilingual-mistral - GGUF
This repo contains GGUF format model files for [gagan3012/Multilingual-mistral](https://huggingface.co/gagan3012/Multilingual-mistral).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Multilingual-mistral-Q2_K.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [Multilingual-mistral-Q3_K_S.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [Multilingual-mistral-Q3_K_M.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [Multilingual-mistral-Q3_K_L.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [Multilingual-mistral-Q4_0.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Multilingual-mistral-Q4_K_S.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [Multilingual-mistral-Q4_K_M.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [Multilingual-mistral-Q5_0.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Multilingual-mistral-Q5_K_S.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [Multilingual-mistral-Q5_K_M.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [Multilingual-mistral-Q6_K.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [Multilingual-mistral-Q8_0.gguf](https://huggingface.co/tensorblock/Multilingual-mistral-GGUF/blob/main/Multilingual-mistral-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Multilingual-mistral-GGUF --include "Multilingual-mistral-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Multilingual-mistral-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Lelantos-DPO-7B-GGUF | tensorblock | 2025-04-21T00:31:08Z | 36 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:SanjiWatsuki/Lelantos-DPO-7B",
"base_model:quantized:SanjiWatsuki/Lelantos-DPO-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-18T15:49:32Z | ---
license: cc-by-nc-4.0
base_model: SanjiWatsuki/Lelantos-DPO-7B
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## SanjiWatsuki/Lelantos-DPO-7B - GGUF
This repo contains GGUF format model files for [SanjiWatsuki/Lelantos-DPO-7B](https://huggingface.co/SanjiWatsuki/Lelantos-DPO-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Lelantos-DPO-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Lelantos-DPO-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Lelantos-DPO-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Lelantos-DPO-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Lelantos-DPO-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Lelantos-DPO-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Lelantos-DPO-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Lelantos-DPO-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Lelantos-DPO-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Lelantos-DPO-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Lelantos-DPO-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Lelantos-DPO-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Lelantos-DPO-7B-GGUF/blob/main/Lelantos-DPO-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Lelantos-DPO-7B-GGUF --include "Lelantos-DPO-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Lelantos-DPO-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
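Once a file is downloaded, one way to run it locally is through the `llama-cpp-python` bindings. The sketch below is only an illustrative assumption: the package is installed separately, the model path and sampling settings are placeholders, and it uses the ChatML-style format shown in the prompt template above.
```python
# pip install llama-cpp-python  (assumed to be installed separately)
from llama_cpp import Llama

# Load the downloaded GGUF file; the path below is a placeholder.
llm = Llama(
    model_path="MY_LOCAL_DIR/Lelantos-DPO-7B-Q4_K_M.gguf",
    n_ctx=4096,           # context window; adjust to your hardware
    chat_format="chatml",  # matches the <|im_start|> template shown above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a GGUF file is in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```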
|
tensorblock/MDBX-7B-GGUF | tensorblock | 2025-04-21T00:31:07Z | 39 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"leveldevai/MarcDareBeagle-7B",
"leveldevai/MarcBeagle-7B",
"TensorBlock",
"GGUF",
"base_model:flemmingmiguel/MDBX-7B",
"base_model:quantized:flemmingmiguel/MDBX-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-18T15:27:50Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- leveldevai/MarcDareBeagle-7B
- leveldevai/MarcBeagle-7B
- TensorBlock
- GGUF
base_model: flemmingmiguel/MDBX-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## flemmingmiguel/MDBX-7B - GGUF
This repo contains GGUF format model files for [flemmingmiguel/MDBX-7B](https://huggingface.co/flemmingmiguel/MDBX-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MDBX-7B-Q2_K.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [MDBX-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [MDBX-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [MDBX-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [MDBX-7B-Q4_0.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MDBX-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [MDBX-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [MDBX-7B-Q5_0.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MDBX-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [MDBX-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [MDBX-7B-Q6_K.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [MDBX-7B-Q8_0.gguf](https://huggingface.co/tensorblock/MDBX-7B-GGUF/blob/main/MDBX-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MDBX-7B-GGUF --include "MDBX-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MDBX-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Argetsu-GGUF | tensorblock | 2025-04-21T00:30:51Z | 37 | 0 | null | [
"gguf",
"mistral",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:Azazelle/Argetsu",
"base_model:quantized:Azazelle/Argetsu",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-18T11:59:10Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
- TensorBlock
- GGUF
license: cc-by-4.0
base_model: Azazelle/Argetsu
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Azazelle/Argetsu - GGUF
This repo contains GGUF format model files for [Azazelle/Argetsu](https://huggingface.co/Azazelle/Argetsu).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Argetsu-Q2_K.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Argetsu-Q3_K_S.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Argetsu-Q3_K_M.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Argetsu-Q3_K_L.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Argetsu-Q4_0.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Argetsu-Q4_K_S.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Argetsu-Q4_K_M.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Argetsu-Q5_0.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Argetsu-Q5_K_S.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Argetsu-Q5_K_M.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Argetsu-Q6_K.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Argetsu-Q8_0.gguf](https://huggingface.co/tensorblock/Argetsu-GGUF/blob/main/Argetsu-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Argetsu-GGUF --include "Argetsu-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Argetsu-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/firefly-mixtral-8x7b-GGUF | tensorblock | 2025-04-21T00:30:50Z | 20 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:YeungNLP/firefly-mixtral-8x7b",
"base_model:quantized:YeungNLP/firefly-mixtral-8x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-18T10:51:26Z | ---
license: apache-2.0
language:
- en
base_model: YeungNLP/firefly-mixtral-8x7b
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## YeungNLP/firefly-mixtral-8x7b - GGUF
This repo contains GGUF format model files for [YeungNLP/firefly-mixtral-8x7b](https://huggingface.co/YeungNLP/firefly-mixtral-8x7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [firefly-mixtral-8x7b-Q2_K.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [firefly-mixtral-8x7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [firefly-mixtral-8x7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [firefly-mixtral-8x7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [firefly-mixtral-8x7b-Q4_0.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [firefly-mixtral-8x7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [firefly-mixtral-8x7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [firefly-mixtral-8x7b-Q5_0.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [firefly-mixtral-8x7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [firefly-mixtral-8x7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [firefly-mixtral-8x7b-Q6_K.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [firefly-mixtral-8x7b-Q8_0.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/firefly-mixtral-8x7b-GGUF --include "firefly-mixtral-8x7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/firefly-mixtral-8x7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mediquad-4x7b-GGUF | tensorblock | 2025-04-21T00:30:47Z | 28 | 0 | null | [
"gguf",
"moe",
"merge",
"epfl-llm/meditron-7b",
"chaoyi-wu/PMC_LLAMA_7B_10_epoch",
"allenai/tulu-2-dpo-7b",
"microsoft/Orca-2-7b",
"TensorBlock",
"GGUF",
"base_model:Technoculture/Mediquad-4x7b",
"base_model:quantized:Technoculture/Mediquad-4x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-18T08:22:36Z | ---
license: apache-2.0
tags:
- moe
- merge
- epfl-llm/meditron-7b
- chaoyi-wu/PMC_LLAMA_7B_10_epoch
- allenai/tulu-2-dpo-7b
- microsoft/Orca-2-7b
- TensorBlock
- GGUF
base_model: Technoculture/Mediquad-4x7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Technoculture/Mediquad-4x7b - GGUF
This repo contains GGUF format model files for [Technoculture/Mediquad-4x7b](https://huggingface.co/Technoculture/Mediquad-4x7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mediquad-4x7b-Q2_K.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q2_K.gguf) | Q2_K | 7.235 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mediquad-4x7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q3_K_S.gguf) | Q3_K_S | 8.530 GB | very small, high quality loss |
| [Mediquad-4x7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q3_K_M.gguf) | Q3_K_M | 9.489 GB | very small, high quality loss |
| [Mediquad-4x7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q3_K_L.gguf) | Q3_K_L | 10.295 GB | small, substantial quality loss |
| [Mediquad-4x7b-Q4_0.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q4_0.gguf) | Q4_0 | 11.132 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mediquad-4x7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q4_K_S.gguf) | Q4_K_S | 11.231 GB | small, greater quality loss |
| [Mediquad-4x7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q4_K_M.gguf) | Q4_K_M | 11.945 GB | medium, balanced quality - recommended |
| [Mediquad-4x7b-Q5_0.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q5_0.gguf) | Q5_0 | 13.581 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mediquad-4x7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q5_K_S.gguf) | Q5_K_S | 13.581 GB | large, low quality loss - recommended |
| [Mediquad-4x7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q5_K_M.gguf) | Q5_K_M | 14.000 GB | large, very low quality loss - recommended |
| [Mediquad-4x7b-Q6_K.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q6_K.gguf) | Q6_K | 16.184 GB | very large, extremely low quality loss |
| [Mediquad-4x7b-Q8_0.gguf](https://huggingface.co/tensorblock/Mediquad-4x7b-GGUF/blob/main/Mediquad-4x7b-Q8_0.gguf) | Q8_0 | 20.960 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mediquad-4x7b-GGUF --include "Mediquad-4x7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mediquad-4x7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
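After downloading, the file can be loaded by any llama.cpp-compatible runtime. A minimal sketch, assuming a llama.cpp build that provides the `llama-cli` binary and the `Q4_K_M` file from the table above (the prompt is only illustrative):

```shell
# Minimal sketch: run the downloaded quant with llama.cpp's llama-cli.
# Assumes llama-cli is on PATH and the file was downloaded to MY_LOCAL_DIR.
llama-cli -m MY_LOCAL_DIR/Mediquad-4x7b-Q4_K_M.gguf \
  -p "List three differential diagnoses for acute chest pain." \
  -n 256
```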
|
psresearch/Relation-extraction-Deberta-v3-large | psresearch | 2025-04-21T00:30:47Z | 0 | 0 | transformers | [
"transformers",
"Academic",
"Scholarly",
"text-classification",
"en",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-20T23:47:11Z | ---
license: mit
language:
- en
metrics:
- accuracy
base_model:
- microsoft/deberta-v3-large
pipeline_tag: text-classification
library_name: transformers
tags:
- Academic
- Scholarly
--- |
tensorblock/megatron_1.1_MoE_2x7B-GGUF | tensorblock | 2025-04-21T00:30:43Z | 41 | 0 | null | [
"gguf",
"frankenmoe",
"merge",
"MoE",
"Mixtral",
"TensorBlock",
"GGUF",
"base_model:Eurdem/megatron_1.1_MoE_2x7B",
"base_model:quantized:Eurdem/megatron_1.1_MoE_2x7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-18T06:15:54Z | ---
license: apache-2.0
tags:
- frankenmoe
- merge
- MoE
- Mixtral
- TensorBlock
- GGUF
base_model: Eurdem/megatron_1.1_MoE_2x7B
model-index:
- name: megatron_1.1_MoE_2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_1.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_1.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_1.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_1.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_1.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_1.1_MoE_2x7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Eurdem/megatron_1.1_MoE_2x7B - GGUF
This repo contains GGUF format model files for [Eurdem/megatron_1.1_MoE_2x7B](https://huggingface.co/Eurdem/megatron_1.1_MoE_2x7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>GPT4 Correct System: {system_prompt}<|end_of_turn|>GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [megatron_1.1_MoE_2x7B-Q2_K.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q2_K.gguf) | Q2_K | 4.761 GB | smallest, significant quality loss - not recommended for most purposes |
| [megatron_1.1_MoE_2x7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q3_K_S.gguf) | Q3_K_S | 5.588 GB | very small, high quality loss |
| [megatron_1.1_MoE_2x7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q3_K_M.gguf) | Q3_K_M | 6.207 GB | very small, high quality loss |
| [megatron_1.1_MoE_2x7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q3_K_L.gguf) | Q3_K_L | 6.730 GB | small, substantial quality loss |
| [megatron_1.1_MoE_2x7B-Q4_0.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q4_0.gguf) | Q4_0 | 7.281 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [megatron_1.1_MoE_2x7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q4_K_S.gguf) | Q4_K_S | 7.342 GB | small, greater quality loss |
| [megatron_1.1_MoE_2x7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q4_K_M.gguf) | Q4_K_M | 7.783 GB | medium, balanced quality - recommended |
| [megatron_1.1_MoE_2x7B-Q5_0.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q5_0.gguf) | Q5_0 | 8.874 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [megatron_1.1_MoE_2x7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q5_K_S.gguf) | Q5_K_S | 8.874 GB | large, low quality loss - recommended |
| [megatron_1.1_MoE_2x7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q5_K_M.gguf) | Q5_K_M | 9.133 GB | large, very low quality loss - recommended |
| [megatron_1.1_MoE_2x7B-Q6_K.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q6_K.gguf) | Q6_K | 10.567 GB | very large, extremely low quality loss |
| [megatron_1.1_MoE_2x7B-Q8_0.gguf](https://huggingface.co/tensorblock/megatron_1.1_MoE_2x7B-GGUF/blob/main/megatron_1.1_MoE_2x7B-Q8_0.gguf) | Q8_0 | 13.686 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/megatron_1.1_MoE_2x7B-GGUF --include "megatron_1.1_MoE_2x7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/megatron_1.1_MoE_2x7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/quantum-v0.01-GGUF | tensorblock | 2025-04-21T00:30:41Z | 39 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:quantumaikr/quantum-v0.01",
"base_model:quantized:quantumaikr/quantum-v0.01",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-18T05:32:03Z | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: quantumaikr/quantum-v0.01
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## quantumaikr/quantum-v0.01 - GGUF
This repo contains GGUF format model files for [quantumaikr/quantum-v0.01](https://huggingface.co/quantumaikr/quantum-v0.01).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [quantum-v0.01-Q2_K.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [quantum-v0.01-Q3_K_S.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [quantum-v0.01-Q3_K_M.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [quantum-v0.01-Q3_K_L.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [quantum-v0.01-Q4_0.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [quantum-v0.01-Q4_K_S.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [quantum-v0.01-Q4_K_M.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [quantum-v0.01-Q5_0.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [quantum-v0.01-Q5_K_S.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [quantum-v0.01-Q5_K_M.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [quantum-v0.01-Q6_K.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [quantum-v0.01-Q8_0.gguf](https://huggingface.co/tensorblock/quantum-v0.01-GGUF/blob/main/quantum-v0.01-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/quantum-v0.01-GGUF --include "quantum-v0.01-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/quantum-v0.01-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/NexoNimbus-7B-GGUF | tensorblock | 2025-04-21T00:30:31Z | 60 | 0 | null | [
"gguf",
"merge",
"abideen/DareVox-7B",
"udkai/Garrulus",
"TensorBlock",
"GGUF",
"en",
"base_model:abideen/NexoNimbus-7B",
"base_model:quantized:abideen/NexoNimbus-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-18T01:53:15Z | ---
license: apache-2.0
tags:
- merge
- abideen/DareVox-7B
- udkai/Garrulus
- TensorBlock
- GGUF
language:
- en
base_model: abideen/NexoNimbus-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## abideen/NexoNimbus-7B - GGUF
This repo contains GGUF format model files for [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NexoNimbus-7B-Q2_K.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [NexoNimbus-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [NexoNimbus-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [NexoNimbus-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [NexoNimbus-7B-Q4_0.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NexoNimbus-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [NexoNimbus-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [NexoNimbus-7B-Q5_0.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NexoNimbus-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [NexoNimbus-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [NexoNimbus-7B-Q6_K.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [NexoNimbus-7B-Q8_0.gguf](https://huggingface.co/tensorblock/NexoNimbus-7B-GGUF/blob/main/NexoNimbus-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/NexoNimbus-7B-GGUF --include "NexoNimbus-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/NexoNimbus-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/NeuralMarcoro14-7B-GGUF | tensorblock | 2025-04-21T00:30:24Z | 50 | 0 | null | [
"gguf",
"mlabonne/Marcoro14-7B-slerp",
"dpo",
"rlhf",
"merge",
"mergekit",
"lazymergekit",
"TensorBlock",
"GGUF",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:mlabonne/NeuralMarcoro14-7B",
"base_model:quantized:mlabonne/NeuralMarcoro14-7B",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-17T23:57:59Z | ---
license: cc-by-nc-4.0
tags:
- mlabonne/Marcoro14-7B-slerp
- dpo
- rlhf
- merge
- mergekit
- lazymergekit
- TensorBlock
- GGUF
datasets:
- mlabonne/chatml_dpo_pairs
base_model: mlabonne/NeuralMarcoro14-7B
model-index:
- name: NeuralMarcoro14-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mlabonne/NeuralMarcoro14-7B - GGUF
This repo contains GGUF format model files for [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralMarcoro14-7B-Q2_K.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralMarcoro14-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [NeuralMarcoro14-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [NeuralMarcoro14-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [NeuralMarcoro14-7B-Q4_0.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralMarcoro14-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [NeuralMarcoro14-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [NeuralMarcoro14-7B-Q5_0.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralMarcoro14-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [NeuralMarcoro14-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [NeuralMarcoro14-7B-Q6_K.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [NeuralMarcoro14-7B-Q8_0.gguf](https://huggingface.co/tensorblock/NeuralMarcoro14-7B-GGUF/blob/main/NeuralMarcoro14-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/NeuralMarcoro14-7B-GGUF --include "NeuralMarcoro14-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/NeuralMarcoro14-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Yi-Ko-6B-dpo-v4-GGUF | tensorblock | 2025-04-21T00:30:19Z | 51 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:GAI-LLM/Yi-Ko-6B-dpo-v4",
"base_model:quantized:GAI-LLM/Yi-Ko-6B-dpo-v4",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T23:11:54Z | ---
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: GAI-LLM/Yi-Ko-6B-dpo-v4
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## GAI-LLM/Yi-Ko-6B-dpo-v4 - GGUF
This repo contains GGUF format model files for [GAI-LLM/Yi-Ko-6B-dpo-v4](https://huggingface.co/GAI-LLM/Yi-Ko-6B-dpo-v4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-Ko-6B-dpo-v4-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q2_K.gguf) | Q2_K | 2.405 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-Ko-6B-dpo-v4-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q3_K_S.gguf) | Q3_K_S | 2.784 GB | very small, high quality loss |
| [Yi-Ko-6B-dpo-v4-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q3_K_M.gguf) | Q3_K_M | 3.067 GB | very small, high quality loss |
| [Yi-Ko-6B-dpo-v4-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q3_K_L.gguf) | Q3_K_L | 3.311 GB | small, substantial quality loss |
| [Yi-Ko-6B-dpo-v4-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q4_0.gguf) | Q4_0 | 3.562 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-Ko-6B-dpo-v4-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q4_K_S.gguf) | Q4_K_S | 3.585 GB | small, greater quality loss |
| [Yi-Ko-6B-dpo-v4-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q4_K_M.gguf) | Q4_K_M | 3.756 GB | medium, balanced quality - recommended |
| [Yi-Ko-6B-dpo-v4-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q5_0.gguf) | Q5_0 | 4.294 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-Ko-6B-dpo-v4-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q5_K_S.gguf) | Q5_K_S | 4.294 GB | large, low quality loss - recommended |
| [Yi-Ko-6B-dpo-v4-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q5_K_M.gguf) | Q5_K_M | 4.394 GB | large, very low quality loss - recommended |
| [Yi-Ko-6B-dpo-v4-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q6_K.gguf) | Q6_K | 5.072 GB | very large, extremely low quality loss |
| [Yi-Ko-6B-dpo-v4-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-Ko-6B-dpo-v4-GGUF/blob/main/Yi-Ko-6B-dpo-v4-Q8_0.gguf) | Q8_0 | 6.568 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Yi-Ko-6B-dpo-v4-GGUF --include "Yi-Ko-6B-dpo-v4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yi-Ko-6B-dpo-v4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
TareksTesting/Alkahest-V9.1-LLaMa-70B | TareksTesting | 2025-04-21T00:30:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:TareksLab/Dungeons-and-Dragons-V3-LLaMa-70B",
"base_model:merge:TareksLab/Dungeons-and-Dragons-V3-LLaMa-70B",
"base_model:TareksLab/Stylizer-Dark-V1-LLaMa-70B",
"base_model:merge:TareksLab/Stylizer-Dark-V1-LLaMa-70B",
"base_model:TareksLab/Wordsmith-V17-LLaMa-70B",
"base_model:merge:TareksLab/Wordsmith-V17-LLaMa-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-20T23:58:49Z | ---
base_model:
- TareksLab/Dungeons-and-Dragons-V3-LLaMa-70B
- TareksLab/Stylizer-Dark-V1-LLaMa-70B
- TareksLab/Wordsmith-V17-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [TareksLab/Wordsmith-V17-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V17-LLaMa-70B) as the base.
### Models Merged
The following models were included in the merge:
* [TareksLab/Dungeons-and-Dragons-V3-LLaMa-70B](https://huggingface.co/TareksLab/Dungeons-and-Dragons-V3-LLaMa-70B)
* [TareksLab/Stylizer-Dark-V1-LLaMa-70B](https://huggingface.co/TareksLab/Stylizer-Dark-V1-LLaMa-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/Stylizer-Dark-V1-LLaMa-70B
parameters:
weight: 0.33
density: 0.5
- model: TareksLab/Wordsmith-V17-LLaMa-70B
parameters:
weight: 0.34
density: 0.5
- model: TareksLab/Dungeons-and-Dragons-V3-LLaMa-70B
parameters:
weight: 0.33
density: 0.5
merge_method: dare_ties
base_model: TareksLab/Wordsmith-V17-LLaMa-70B
parameters:
normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: TareksLab/Wordsmith-V17-LLaMa-70B
```
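To reproduce a merge like this locally, the YAML above can be passed to the mergekit command-line tool. A minimal sketch, assuming mergekit is installed and the configuration is saved as `config.yaml` (the output directory name is arbitrary):

```shell
# Minimal sketch: run the merge described by the YAML configuration above.
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```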
|
tensorblock/mistral-7b-GGUF | tensorblock | 2025-04-21T00:30:04Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"mistral",
"mistral7b",
"bnb",
"TensorBlock",
"GGUF",
"en",
"base_model:unsloth/mistral-7b",
"base_model:quantized:unsloth/mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T20:00:43Z | ---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
- mistral
- mistral7b
- bnb
- TensorBlock
- GGUF
base_model: unsloth/mistral-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## unsloth/mistral-7b - GGUF
This repo contains GGUF format model files for [unsloth/mistral-7b](https://huggingface.co/unsloth/mistral-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral-7b-Q2_K.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [mistral-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [mistral-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [mistral-7b-Q4_0.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [mistral-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [mistral-7b-Q5_0.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [mistral-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [mistral-7b-Q6_K.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [mistral-7b-Q8_0.gguf](https://huggingface.co/tensorblock/mistral-7b-GGUF/blob/main/mistral-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mistral-7b-GGUF --include "mistral-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mistral-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/StopCarbon-10.7B-v1-GGUF | tensorblock | 2025-04-21T00:29:59Z | 47 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"en",
"base_model:kekmodel/StopCarbon-10.7B-v1",
"base_model:quantized:kekmodel/StopCarbon-10.7B-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-17T19:08:02Z | ---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
- TensorBlock
- GGUF
base_model: kekmodel/StopCarbon-10.7B-v1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## kekmodel/StopCarbon-10.7B-v1 - GGUF
This repo contains GGUF format model files for [kekmodel/StopCarbon-10.7B-v1](https://huggingface.co/kekmodel/StopCarbon-10.7B-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [StopCarbon-10.7B-v1-Q2_K.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [StopCarbon-10.7B-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [StopCarbon-10.7B-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [StopCarbon-10.7B-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [StopCarbon-10.7B-v1-Q4_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [StopCarbon-10.7B-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [StopCarbon-10.7B-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [StopCarbon-10.7B-v1-Q5_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [StopCarbon-10.7B-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [StopCarbon-10.7B-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [StopCarbon-10.7B-v1-Q6_K.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [StopCarbon-10.7B-v1-Q8_0.gguf](https://huggingface.co/tensorblock/StopCarbon-10.7B-v1-GGUF/blob/main/StopCarbon-10.7B-v1-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/StopCarbon-10.7B-v1-GGUF --include "StopCarbon-10.7B-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/StopCarbon-10.7B-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF | tensorblock | 2025-04-21T00:29:58Z | 67 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"base_model:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"base_model:quantized:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-17T18:49:53Z | ---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser - GGUF
This repo contains GGUF format model files for [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
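When running the quantized files below with a llama.cpp-style runtime, the placeholders above are filled in literally. A minimal sketch, assuming the `llama-cli` binary and the `Q4_K_M` file from the table below (system and user messages are illustrative):

```shell
# Minimal sketch: fill the ChatML template above and pass it to llama-cli.
llama-cli -m dolphin-2.6-mistral-7b-dpo-laser-Q4_K_M.gguf -n 256 \
  -p "<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
Write a haiku about the ocean.<|im_end|>
<|im_start|>assistant"
```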
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.6-mistral-7b-dpo-laser-Q2_K.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-2.6-mistral-7b-dpo-laser-Q3_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [dolphin-2.6-mistral-7b-dpo-laser-Q3_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [dolphin-2.6-mistral-7b-dpo-laser-Q3_K_L.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [dolphin-2.6-mistral-7b-dpo-laser-Q4_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-2.6-mistral-7b-dpo-laser-Q4_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [dolphin-2.6-mistral-7b-dpo-laser-Q4_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [dolphin-2.6-mistral-7b-dpo-laser-Q5_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-2.6-mistral-7b-dpo-laser-Q5_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [dolphin-2.6-mistral-7b-dpo-laser-Q5_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [dolphin-2.6-mistral-7b-dpo-laser-Q6_K.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [dolphin-2.6-mistral-7b-dpo-laser-Q8_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF/blob/main/dolphin-2.6-mistral-7b-dpo-laser-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF --include "dolphin-2.6-mistral-7b-dpo-laser-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/dolphin-2.6-mistral-7b-dpo-laser-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/NeuralPipe-7B-slerp-GGUF | tensorblock | 2025-04-21T00:29:56Z | 39 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"TensorBlock",
"GGUF",
"base_model:DeepKarkhanis/NeuralPipe-7B-slerp",
"base_model:quantized:DeepKarkhanis/NeuralPipe-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T18:11:53Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
- TensorBlock
- GGUF
base_model: DeepKarkhanis/NeuralPipe-7B-slerp
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## DeepKarkhanis/NeuralPipe-7B-slerp - GGUF
This repo contains GGUF format model files for [DeepKarkhanis/NeuralPipe-7B-slerp](https://huggingface.co/DeepKarkhanis/NeuralPipe-7B-slerp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralPipe-7B-slerp-Q2_K.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralPipe-7B-slerp-Q3_K_S.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [NeuralPipe-7B-slerp-Q3_K_M.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [NeuralPipe-7B-slerp-Q3_K_L.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [NeuralPipe-7B-slerp-Q4_0.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralPipe-7B-slerp-Q4_K_S.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [NeuralPipe-7B-slerp-Q4_K_M.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [NeuralPipe-7B-slerp-Q5_0.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralPipe-7B-slerp-Q5_K_S.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [NeuralPipe-7B-slerp-Q5_K_M.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [NeuralPipe-7B-slerp-Q6_K.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [NeuralPipe-7B-slerp-Q8_0.gguf](https://huggingface.co/tensorblock/NeuralPipe-7B-slerp-GGUF/blob/main/NeuralPipe-7B-slerp-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/NeuralPipe-7B-slerp-GGUF --include "NeuralPipe-7B-slerp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/NeuralPipe-7B-slerp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/FusionNet_SOLAR-GGUF | tensorblock | 2025-04-21T00:29:51Z | 50 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:TomGrc/FusionNet_SOLAR",
"base_model:quantized:TomGrc/FusionNet_SOLAR",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-17T16:28:14Z | ---
language:
- en
license: mit
pipeline_tag: text-generation
base_model: TomGrc/FusionNet_SOLAR
tags:
- TensorBlock
- GGUF
model-index:
- name: FusionNet_SOLAR
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.21
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## TomGrc/FusionNet_SOLAR - GGUF
This repo contains GGUF format model files for [TomGrc/FusionNet_SOLAR](https://huggingface.co/TomGrc/FusionNet_SOLAR).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
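As a quick illustration of how the placeholders above are intended to be filled, here is a minimal Python sketch; the system and user messages are invented examples, and the resulting string would then be passed to whichever GGUF runtime you use.
```python
# Minimal sketch: fill the prompt template shown above.
# The system_prompt and prompt values are placeholders for illustration.
template = (
    "### System:\n{system_prompt}\n"
    "### User:\n{prompt}\n"
    "### Assistant:\n"
)

formatted = template.format(
    system_prompt="You are a helpful assistant.",
    prompt="Summarize the benefits of model quantization in one sentence.",
)
print(formatted)
```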
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [FusionNet_SOLAR-Q2_K.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q2_K.gguf) | Q2_K | 5.929 GB | smallest, significant quality loss - not recommended for most purposes |
| [FusionNet_SOLAR-Q3_K_S.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q3_K_S.gguf) | Q3_K_S | 6.915 GB | very small, high quality loss |
| [FusionNet_SOLAR-Q3_K_M.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q3_K_M.gguf) | Q3_K_M | 7.707 GB | very small, high quality loss |
| [FusionNet_SOLAR-Q3_K_L.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q3_K_L.gguf) | Q3_K_L | 8.394 GB | small, substantial quality loss |
| [FusionNet_SOLAR-Q4_0.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q4_0.gguf) | Q4_0 | 9.018 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [FusionNet_SOLAR-Q4_K_S.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q4_K_S.gguf) | Q4_K_S | 9.086 GB | small, greater quality loss |
| [FusionNet_SOLAR-Q4_K_M.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q4_K_M.gguf) | Q4_K_M | 9.602 GB | medium, balanced quality - recommended |
| [FusionNet_SOLAR-Q5_0.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q5_0.gguf) | Q5_0 | 10.997 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [FusionNet_SOLAR-Q5_K_S.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q5_K_S.gguf) | Q5_K_S | 10.997 GB | large, low quality loss - recommended |
| [FusionNet_SOLAR-Q5_K_M.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q5_K_M.gguf) | Q5_K_M | 11.298 GB | large, very low quality loss - recommended |
| [FusionNet_SOLAR-Q6_K.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q6_K.gguf) | Q6_K | 13.100 GB | very large, extremely low quality loss |
| [FusionNet_SOLAR-Q8_0.gguf](https://huggingface.co/tensorblock/FusionNet_SOLAR-GGUF/blob/main/FusionNet_SOLAR-Q8_0.gguf) | Q8_0 | 16.967 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/FusionNet_SOLAR-GGUF --include "FusionNet_SOLAR-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/FusionNet_SOLAR-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/7Bx4_DPO_2e-GGUF | tensorblock | 2025-04-21T00:29:48Z | 29 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:yunconglong/7Bx4_DPO_2e",
"base_model:quantized:yunconglong/7Bx4_DPO_2e",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T15:56:55Z | ---
license: mit
tags:
- TensorBlock
- GGUF
base_model: yunconglong/7Bx4_DPO_2e
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## yunconglong/7Bx4_DPO_2e - GGUF
This repo contains GGUF format model files for [yunconglong/7Bx4_DPO_2e](https://huggingface.co/yunconglong/7Bx4_DPO_2e).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [7Bx4_DPO_2e-Q2_K.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q2_K.gguf) | Q2_K | 8.843 GB | smallest, significant quality loss - not recommended for most purposes |
| [7Bx4_DPO_2e-Q3_K_S.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q3_K_S.gguf) | Q3_K_S | 10.433 GB | very small, high quality loss |
| [7Bx4_DPO_2e-Q3_K_M.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q3_K_M.gguf) | Q3_K_M | 11.580 GB | very small, high quality loss |
| [7Bx4_DPO_2e-Q3_K_L.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q3_K_L.gguf) | Q3_K_L | 12.544 GB | small, substantial quality loss |
| [7Bx4_DPO_2e-Q4_0.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q4_0.gguf) | Q4_0 | 13.624 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [7Bx4_DPO_2e-Q4_K_S.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q4_K_S.gguf) | Q4_K_S | 13.743 GB | small, greater quality loss |
| [7Bx4_DPO_2e-Q4_K_M.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q4_K_M.gguf) | Q4_K_M | 14.610 GB | medium, balanced quality - recommended |
| [7Bx4_DPO_2e-Q5_0.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q5_0.gguf) | Q5_0 | 16.626 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [7Bx4_DPO_2e-Q5_K_S.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q5_K_S.gguf) | Q5_K_S | 16.626 GB | large, low quality loss - recommended |
| [7Bx4_DPO_2e-Q5_K_M.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q5_K_M.gguf) | Q5_K_M | 17.134 GB | large, very low quality loss - recommended |
| [7Bx4_DPO_2e-Q6_K.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q6_K.gguf) | Q6_K | 19.817 GB | very large, extremely low quality loss |
| [7Bx4_DPO_2e-Q8_0.gguf](https://huggingface.co/tensorblock/7Bx4_DPO_2e-GGUF/blob/main/7Bx4_DPO_2e-Q8_0.gguf) | Q8_0 | 25.666 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/7Bx4_DPO_2e-GGUF --include "7Bx4_DPO_2e-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/7Bx4_DPO_2e-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mistral-7B-Claim-Extractor-GGUF | tensorblock | 2025-04-21T00:29:45Z | 45 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:dongyru/Mistral-7B-Claim-Extractor",
"base_model:quantized:dongyru/Mistral-7B-Claim-Extractor",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-17T14:54:00Z | ---
license: apache-2.0
base_model: dongyru/Mistral-7B-Claim-Extractor
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## dongyru/Mistral-7B-Claim-Extractor - GGUF
This repo contains GGUF format model files for [dongyru/Mistral-7B-Claim-Extractor](https://huggingface.co/dongyru/Mistral-7B-Claim-Extractor).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
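As a usage sketch, the snippet below shows one way to query a downloaded quant with the `[INST]` format above via the `llama-cpp-python` bindings. It is illustrative only: the file path is a placeholder, and the leading `<s>` BOS token is normally added by the tokenizer rather than written into the prompt string.
```python
from llama_cpp import Llama

# Load a downloaded quant (path is a placeholder) and query it with the
# [INST] template shown above; the BOS token is added automatically.
llm = Llama(model_path="./Mistral-7B-Claim-Extractor-Q4_K_M.gguf", n_ctx=2048)
prompt = "[INST] Extract the factual claims from: The Eiffel Tower opened in 1889. [/INST]"
output = llm(prompt, max_tokens=128)
print(output["choices"][0]["text"])
```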
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-Claim-Extractor-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-Claim-Extractor-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-Claim-Extractor-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-Claim-Extractor-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-Claim-Extractor-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-Claim-Extractor-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-Claim-Extractor-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-Claim-Extractor-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-Claim-Extractor-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-Claim-Extractor-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-Claim-Extractor-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-Claim-Extractor-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Claim-Extractor-GGUF/blob/main/Mistral-7B-Claim-Extractor-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-7B-Claim-Extractor-GGUF --include "Mistral-7B-Claim-Extractor-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-7B-Claim-Extractor-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/dolphin-2.6-mistral-7b-GGUF | tensorblock | 2025-04-21T00:29:44Z | 81 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"base_model:cognitivecomputations/dolphin-2.6-mistral-7b",
"base_model:quantized:cognitivecomputations/dolphin-2.6-mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T14:11:13Z | ---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: cognitivecomputations/dolphin-2.6-mistral-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cognitivecomputations/dolphin-2.6-mistral-7b - GGUF
This repo contains GGUF format model files for [cognitivecomputations/dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.6-mistral-7b-Q2_K.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-2.6-mistral-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [dolphin-2.6-mistral-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [dolphin-2.6-mistral-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [dolphin-2.6-mistral-7b-Q4_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-2.6-mistral-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [dolphin-2.6-mistral-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [dolphin-2.6-mistral-7b-Q5_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-2.6-mistral-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [dolphin-2.6-mistral-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [dolphin-2.6-mistral-7b-Q6_K.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [dolphin-2.6-mistral-7b-Q8_0.gguf](https://huggingface.co/tensorblock/dolphin-2.6-mistral-7b-GGUF/blob/main/dolphin-2.6-mistral-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/dolphin-2.6-mistral-7b-GGUF --include "dolphin-2.6-mistral-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/dolphin-2.6-mistral-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
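The pattern-based download also has a Python equivalent. The sketch below uses `snapshot_download` with an `allow_patterns` filter to mirror the `--include` flag above; the repo id matches this card and `MY_LOCAL_DIR` is a placeholder.
```python
from huggingface_hub import snapshot_download

# Fetch only the Q4_K quants from this repo, mirroring the --include pattern above.
snapshot_download(
    repo_id="tensorblock/dolphin-2.6-mistral-7b-GGUF",
    allow_patterns=["*Q4_K*.gguf"],
    local_dir="MY_LOCAL_DIR",
)
```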
|
tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF | tensorblock | 2025-04-21T00:29:42Z | 27 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr",
"base_model:quantized:vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T13:24:17Z | ---
base_model: vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr - GGUF
This repo contains GGUF format model files for [vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr](https://huggingface.co/vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q2_K.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q2_K.gguf) | Q2_K | 2.632 GB | smallest, significant quality loss - not recommended for most purposes |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q3_K_S.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q3_K_S.gguf) | Q3_K_S | 3.035 GB | very small, high quality loss |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q3_K_M.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q3_K_M.gguf) | Q3_K_M | 3.622 GB | very small, high quality loss |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q3_K_L.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q3_K_L.gguf) | Q3_K_L | 3.941 GB | small, substantial quality loss |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q4_0.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q4_0.gguf) | Q4_0 | 3.918 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q4_K_S.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q4_K_S.gguf) | Q4_K_S | 3.952 GB | small, greater quality loss |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q4_K_M.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q4_K_M.gguf) | Q4_K_M | 4.396 GB | medium, balanced quality - recommended |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q5_0.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q5_0.gguf) | Q5_0 | 4.749 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q5_K_S.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q5_K_S.gguf) | Q5_K_S | 4.749 GB | large, low quality loss - recommended |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q5_K_M.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q5_K_M.gguf) | Q5_K_M | 5.106 GB | large, very low quality loss - recommended |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q6_K.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q6_K.gguf) | Q6_K | 5.632 GB | very large, extremely low quality loss |
| [EleutherAI_pythia-6.9b-deduped__sft__tldr-Q8_0.gguf](https://huggingface.co/tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF/blob/main/EleutherAI_pythia-6.9b-deduped__sft__tldr-Q8_0.gguf) | Q8_0 | 7.293 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF --include "EleutherAI_pythia-6.9b-deduped__sft__tldr-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/EleutherAI_pythia-6.9b-deduped__sft__tldr-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF | tensorblock | 2025-04-21T00:29:37Z | 208 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"base_model:OpenBuddy/openbuddy-deepseek-10b-v17.1-4k",
"base_model:quantized:OpenBuddy/openbuddy-deepseek-10b-v17.1-4k",
"license:other",
"region:us"
] | text-generation | 2024-12-17T12:19:10Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
base_model: OpenBuddy/openbuddy-deepseek-10b-v17.1-4k
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## OpenBuddy/openbuddy-deepseek-10b-v17.1-4k - GGUF
This repo contains GGUF format model files for [OpenBuddy/openbuddy-deepseek-10b-v17.1-4k](https://huggingface.co/OpenBuddy/openbuddy-deepseek-10b-v17.1-4k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [openbuddy-deepseek-10b-v17.1-4k-Q2_K.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q2_K.gguf) | Q2_K | 4.058 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-deepseek-10b-v17.1-4k-Q3_K_S.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q3_K_S.gguf) | Q3_K_S | 4.704 GB | very small, high quality loss |
| [openbuddy-deepseek-10b-v17.1-4k-Q3_K_M.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q3_K_M.gguf) | Q3_K_M | 5.226 GB | very small, high quality loss |
| [openbuddy-deepseek-10b-v17.1-4k-Q3_K_L.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q3_K_L.gguf) | Q3_K_L | 5.677 GB | small, substantial quality loss |
| [openbuddy-deepseek-10b-v17.1-4k-Q4_0.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q4_0.gguf) | Q4_0 | 6.050 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-deepseek-10b-v17.1-4k-Q4_K_S.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q4_K_S.gguf) | Q4_K_S | 6.092 GB | small, greater quality loss |
| [openbuddy-deepseek-10b-v17.1-4k-Q4_K_M.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q4_K_M.gguf) | Q4_K_M | 6.432 GB | medium, balanced quality - recommended |
| [openbuddy-deepseek-10b-v17.1-4k-Q5_0.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q5_0.gguf) | Q5_0 | 7.316 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-deepseek-10b-v17.1-4k-Q5_K_S.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q5_K_S.gguf) | Q5_K_S | 7.316 GB | large, low quality loss - recommended |
| [openbuddy-deepseek-10b-v17.1-4k-Q5_K_M.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q5_K_M.gguf) | Q5_K_M | 7.514 GB | large, very low quality loss - recommended |
| [openbuddy-deepseek-10b-v17.1-4k-Q6_K.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q6_K.gguf) | Q6_K | 8.662 GB | very large, extremely low quality loss |
| [openbuddy-deepseek-10b-v17.1-4k-Q8_0.gguf](https://huggingface.co/tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF/blob/main/openbuddy-deepseek-10b-v17.1-4k-Q8_0.gguf) | Q8_0 | 11.218 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF --include "openbuddy-deepseek-10b-v17.1-4k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/openbuddy-deepseek-10b-v17.1-4k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/CatMacaroni-Slerp-GGUF | tensorblock | 2025-04-21T00:29:35Z | 28 | 0 | null | [
"gguf",
"merge",
"TensorBlock",
"GGUF",
"base_model:cookinai/CatMacaroni-Slerp",
"base_model:quantized:cookinai/CatMacaroni-Slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T12:03:55Z | ---
license: apache-2.0
tags:
- merge
- TensorBlock
- GGUF
base_model: cookinai/CatMacaroni-Slerp
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## cookinai/CatMacaroni-Slerp - GGUF
This repo contains GGUF format model files for [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CatMacaroni-Slerp-Q2_K.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [CatMacaroni-Slerp-Q3_K_S.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [CatMacaroni-Slerp-Q3_K_M.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [CatMacaroni-Slerp-Q3_K_L.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [CatMacaroni-Slerp-Q4_0.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CatMacaroni-Slerp-Q4_K_S.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [CatMacaroni-Slerp-Q4_K_M.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [CatMacaroni-Slerp-Q5_0.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CatMacaroni-Slerp-Q5_K_S.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [CatMacaroni-Slerp-Q5_K_M.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [CatMacaroni-Slerp-Q6_K.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [CatMacaroni-Slerp-Q8_0.gguf](https://huggingface.co/tensorblock/CatMacaroni-Slerp-GGUF/blob/main/CatMacaroni-Slerp-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CatMacaroni-Slerp-GGUF --include "CatMacaroni-Slerp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CatMacaroni-Slerp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
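After downloading, a quick integrity check can catch truncated transfers. The small sketch below (paths are placeholders) verifies that the file begins with the 4-byte `GGUF` magic that every valid GGUF file carries.
```python
from pathlib import Path

# Quick sanity check: every valid GGUF file starts with the 4-byte magic b"GGUF".
model_file = Path("MY_LOCAL_DIR") / "CatMacaroni-Slerp-Q4_K_M.gguf"  # placeholder path
with model_file.open("rb") as f:
    magic = f.read(4)
print("valid GGUF header" if magic == b"GGUF" else "unexpected header, re-download the file")
```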
|
tensorblock/rizla-17-GGUF | tensorblock | 2025-04-21T00:29:34Z | 26 | 0 | null | [
"gguf",
"dpo",
"merge",
"mergekit",
"TensorBlock",
"GGUF",
"base_model:rizla/rizla-17",
"base_model:quantized:rizla/rizla-17",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T11:05:48Z | ---
license: cc-by-nc-nd-4.0
base_model: rizla/rizla-17
tags:
- dpo
- merge
- mergekit
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## rizla/rizla-17 - GGUF
This repo contains GGUF format model files for [rizla/rizla-17](https://huggingface.co/rizla/rizla-17).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [rizla-17-Q2_K.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q2_K.gguf) | Q2_K | 5.769 GB | smallest, significant quality loss - not recommended for most purposes |
| [rizla-17-Q3_K_S.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q3_K_S.gguf) | Q3_K_S | 6.774 GB | very small, high quality loss |
| [rizla-17-Q3_K_M.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q3_K_M.gguf) | Q3_K_M | 7.522 GB | very small, high quality loss |
| [rizla-17-Q3_K_L.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q3_K_L.gguf) | Q3_K_L | 8.166 GB | small, substantial quality loss |
| [rizla-17-Q4_0.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q4_0.gguf) | Q4_0 | 8.834 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [rizla-17-Q4_K_S.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q4_K_S.gguf) | Q4_K_S | 8.895 GB | small, greater quality loss |
| [rizla-17-Q4_K_M.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q4_K_M.gguf) | Q4_K_M | 9.430 GB | medium, balanced quality - recommended |
| [rizla-17-Q5_0.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q5_0.gguf) | Q5_0 | 10.772 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [rizla-17-Q5_K_S.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q5_K_S.gguf) | Q5_K_S | 10.772 GB | large, low quality loss - recommended |
| [rizla-17-Q5_K_M.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q5_K_M.gguf) | Q5_K_M | 11.079 GB | large, very low quality loss - recommended |
| [rizla-17-Q6_K.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q6_K.gguf) | Q6_K | 12.832 GB | very large, extremely low quality loss |
| [rizla-17-Q8_0.gguf](https://huggingface.co/tensorblock/rizla-17-GGUF/blob/main/rizla-17-Q8_0.gguf) | Q8_0 | 16.619 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/rizla-17-GGUF --include "rizla-17-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/rizla-17-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/EstopianMaid-13B-GGUF | tensorblock | 2025-04-21T00:29:31Z | 58 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"text-generation-inference",
"TensorBlock",
"GGUF",
"en",
"base_model:KatyTheCutie/EstopianMaid-13B",
"base_model:quantized:KatyTheCutie/EstopianMaid-13B",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T10:45:34Z | ---
language:
- en
library_name: transformers
tags:
- roleplay
- text-generation-inference
- TensorBlock
- GGUF
license: llama2
base_model: KatyTheCutie/EstopianMaid-13B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## KatyTheCutie/EstopianMaid-13B - GGUF
This repo contains GGUF format model files for [KatyTheCutie/EstopianMaid-13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [EstopianMaid-13B-Q2_K.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [EstopianMaid-13B-Q3_K_S.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [EstopianMaid-13B-Q3_K_M.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [EstopianMaid-13B-Q3_K_L.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [EstopianMaid-13B-Q4_0.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [EstopianMaid-13B-Q4_K_S.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [EstopianMaid-13B-Q4_K_M.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [EstopianMaid-13B-Q5_0.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [EstopianMaid-13B-Q5_K_S.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [EstopianMaid-13B-Q5_K_M.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [EstopianMaid-13B-Q6_K.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [EstopianMaid-13B-Q8_0.gguf](https://huggingface.co/tensorblock/EstopianMaid-13B-GGUF/blob/main/EstopianMaid-13B-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/EstopianMaid-13B-GGUF --include "EstopianMaid-13B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/EstopianMaid-13B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
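For chat-style use, these files can also be served over an OpenAI-compatible HTTP endpoint. This is a rough sketch assuming a llama.cpp build that ships the `llama-server` binary; the port and quant choice are arbitrary placeholders.
```shell
# Start an OpenAI-compatible server on port 8080 with the Q4_K_M quant.
./llama-server -m EstopianMaid-13B-Q4_K_M.gguf --port 8080 &

# Send a plain chat-completions request to it.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a two-line poem about tea."}]}'
```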
|
tensorblock/Abel-7B-002-GGUF | tensorblock | 2025-04-21T00:29:26Z | 26 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:GAIR/Abel-7B-002",
"base_model:quantized:GAIR/Abel-7B-002",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T09:29:10Z | ---
base_model: GAIR/Abel-7B-002
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## GAIR/Abel-7B-002 - GGUF
This repo contains GGUF format model files for [GAIR/Abel-7B-002](https://huggingface.co/GAIR/Abel-7B-002).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Abel-7B-002-Q2_K.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Abel-7B-002-Q3_K_S.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Abel-7B-002-Q3_K_M.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Abel-7B-002-Q3_K_L.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Abel-7B-002-Q4_0.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Abel-7B-002-Q4_K_S.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q4_K_S.gguf) | Q4_K_S | 4.141 GB | small, greater quality loss |
| [Abel-7B-002-Q4_K_M.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q4_K_M.gguf) | Q4_K_M | 4.369 GB | medium, balanced quality - recommended |
| [Abel-7B-002-Q5_0.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Abel-7B-002-Q5_K_S.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Abel-7B-002-Q5_K_M.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q5_K_M.gguf) | Q5_K_M | 5.132 GB | large, very low quality loss - recommended |
| [Abel-7B-002-Q6_K.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Abel-7B-002-Q8_0.gguf](https://huggingface.co/tensorblock/Abel-7B-002-GGUF/blob/main/Abel-7B-002-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Abel-7B-002-GGUF --include "Abel-7B-002-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Abel-7B-002-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
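Large quants can be slow to fetch over a plain connection; one optional speed-up is the `hf_transfer` backend for the Hugging Face CLI. The sketch below uses that optional package and its environment variable, which belong to `huggingface_hub`'s tooling rather than to this repo.
```shell
# Install the optional high-throughput transfer backend...
pip install -U hf_transfer
# ...and enable it for a single download command.
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download tensorblock/Abel-7B-002-GGUF \
  --include "Abel-7B-002-Q4_K_M.gguf" --local-dir MY_LOCAL_DIR
```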
|
tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF | tensorblock | 2025-04-21T00:29:21Z | 28 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Deathsquad10/TinyLlama-1.1B-Remix-V.2",
"base_model:quantized:Deathsquad10/TinyLlama-1.1B-Remix-V.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-17T03:15:16Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
tags:
- TensorBlock
- GGUF
base_model: Deathsquad10/TinyLlama-1.1B-Remix-V.2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Deathsquad10/TinyLlama-1.1B-Remix-V.2 - GGUF
This repo contains GGUF format model files for [Deathsquad10/TinyLlama-1.1B-Remix-V.2](https://huggingface.co/Deathsquad10/TinyLlama-1.1B-Remix-V.2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
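Filled in, a single-turn request under this template might look like the sketch below; the system and user strings are placeholders, and the quant filename assumes the Q4_K_M file listed in the table further down.
```shell
# One-shot prompt assembled from the template above; bash's $'...' quoting
# turns the \n escapes into real newlines before llama-cli sees them.
./llama-cli -m TinyLlama-1.1B-Remix-V.2-Q4_K_M.gguf -n 256 \
  -p $'<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nWhat is the capital of France?</s>\n<|assistant|>\n'
```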
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TinyLlama-1.1B-Remix-V.2-Q2_K.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [TinyLlama-1.1B-Remix-V.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [TinyLlama-1.1B-Remix-V.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [TinyLlama-1.1B-Remix-V.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [TinyLlama-1.1B-Remix-V.2-Q4_0.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TinyLlama-1.1B-Remix-V.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [TinyLlama-1.1B-Remix-V.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [TinyLlama-1.1B-Remix-V.2-Q5_0.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TinyLlama-1.1B-Remix-V.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [TinyLlama-1.1B-Remix-V.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [TinyLlama-1.1B-Remix-V.2-Q6_K.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [TinyLlama-1.1B-Remix-V.2-Q8_0.gguf](https://huggingface.co/tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF/blob/main/TinyLlama-1.1B-Remix-V.2-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF --include "TinyLlama-1.1B-Remix-V.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TinyLlama-1.1B-Remix-V.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Mistral-7B-golden-GGUF | tensorblock | 2025-04-21T00:29:19Z | 36 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:liuda1/Mistral-7B-golden",
"base_model:quantized:liuda1/Mistral-7B-golden",
"license:unknown",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T02:35:49Z | ---
license: unknown
base_model: liuda1/Mistral-7B-golden
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## liuda1/Mistral-7B-golden - GGUF
This repo contains GGUF format model files for [liuda1/Mistral-7B-golden](https://huggingface.co/liuda1/Mistral-7B-golden).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-golden-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-golden-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-golden-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-golden-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-golden-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-golden-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-golden-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-golden-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-golden-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-golden-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-golden-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-golden-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-golden-GGUF/blob/main/Mistral-7B-golden-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-7B-golden-GGUF --include "Mistral-7B-golden-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-7B-golden-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
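To sanity-check a downloaded file (architecture, context length, quantization type) before loading it, the header metadata can be printed with the dump script from the `gguf` Python package; this is a sketch that assumes that optional package, not anything bundled here.
```shell
# Install the GGUF tooling and dump the file's metadata.
pip install -U gguf
gguf-dump MY_LOCAL_DIR/Mistral-7B-golden-Q2_K.gguf
```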
|
tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF | tensorblock | 2025-04-21T00:29:14Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Yi",
"TensorBlock",
"GGUF",
"en",
"base_model:brucethemoose/Yi-34B-200K-DARE-megamerge-v8",
"base_model:quantized:brucethemoose/Yi-34B-200K-DARE-megamerge-v8",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-12-17T00:35:40Z | ---
language:
- en
license: other
library_name: transformers
tags:
- mergekit
- merge
- Yi
- TensorBlock
- GGUF
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
base_model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
model-index:
- name: Yi-34B-200K-DARE-megamerge-v8
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.31
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## brucethemoose/Yi-34B-200K-DARE-megamerge-v8 - GGUF
This repo contains GGUF format model files for [brucethemoose/Yi-34B-200K-DARE-megamerge-v8](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-34B-200K-DARE-megamerge-v8-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-34B-200K-DARE-megamerge-v8-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [Yi-34B-200K-DARE-megamerge-v8-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [Yi-34B-200K-DARE-megamerge-v8-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [Yi-34B-200K-DARE-megamerge-v8-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-34B-200K-DARE-megamerge-v8-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [Yi-34B-200K-DARE-megamerge-v8-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [Yi-34B-200K-DARE-megamerge-v8-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-34B-200K-DARE-megamerge-v8-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [Yi-34B-200K-DARE-megamerge-v8-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [Yi-34B-200K-DARE-megamerge-v8-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [Yi-34B-200K-DARE-megamerge-v8-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF/blob/main/Yi-34B-200K-DARE-megamerge-v8-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF --include "Yi-34B-200K-DARE-megamerge-v8-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yi-34B-200K-DARE-megamerge-v8-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
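At 34B parameters even the Q4_K_M file is roughly 20 GB, so in practice much of the model is usually offloaded to GPU memory. The run below is a sketch assuming a CUDA- or Metal-enabled llama.cpp build; the layer count and context size are illustrative, not tuned values.
```shell
# Offload as many layers as fit on the GPU (-ngl) and cap the context (-c).
./llama-cli -m Yi-34B-200K-DARE-megamerge-v8-Q4_K_M.gguf \
  -ngl 99 -c 8192 \
  -p "Summarize the theory of plate tectonics in three sentences."
```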
|
tensorblock/Pallas-0.5-LASER-0.4-GGUF | tensorblock | 2025-04-21T00:29:01Z | 26 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Mihaiii/Pallas-0.5-LASER-0.4",
"base_model:quantized:Mihaiii/Pallas-0.5-LASER-0.4",
"license:other",
"region:us"
] | null | 2024-12-17T00:03:34Z | ---
base_model: Mihaiii/Pallas-0.5-LASER-0.4
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Mihaiii/Pallas-0.5-LASER-0.4 - GGUF
This repo contains GGUF format model files for [Mihaiii/Pallas-0.5-LASER-0.4](https://huggingface.co/Mihaiii/Pallas-0.5-LASER-0.4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Pallas-0.5-LASER-0.4-Q2_K.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [Pallas-0.5-LASER-0.4-Q3_K_S.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [Pallas-0.5-LASER-0.4-Q3_K_M.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [Pallas-0.5-LASER-0.4-Q3_K_L.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [Pallas-0.5-LASER-0.4-Q4_0.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Pallas-0.5-LASER-0.4-Q4_K_S.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [Pallas-0.5-LASER-0.4-Q4_K_M.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [Pallas-0.5-LASER-0.4-Q5_0.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Pallas-0.5-LASER-0.4-Q5_K_S.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [Pallas-0.5-LASER-0.4-Q5_K_M.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [Pallas-0.5-LASER-0.4-Q6_K.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [Pallas-0.5-LASER-0.4-Q8_0.gguf](https://huggingface.co/tensorblock/Pallas-0.5-LASER-0.4-GGUF/blob/main/Pallas-0.5-LASER-0.4-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Pallas-0.5-LASER-0.4-GGUF --include "Pallas-0.5-LASER-0.4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Pallas-0.5-LASER-0.4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF | tensorblock | 2025-04-21T00:28:58Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:brucethemoose/Yi-34B-200K-DARE-merge-v5",
"base_model:quantized:brucethemoose/Yi-34B-200K-DARE-merge-v5",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-16T22:24:49Z | ---
language:
- en
license: other
library_name: transformers
tags:
- text-generation-inference
- merge
- TensorBlock
- GGUF
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: brucethemoose/Yi-34B-200K-DARE-merge-v5
model-index:
- name: Yi-34B-200K-DARE-merge-v5
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.46
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v5
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## brucethemoose/Yi-34B-200K-DARE-merge-v5 - GGUF
This repo contains GGUF format model files for [brucethemoose/Yi-34B-200K-DARE-merge-v5](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-34B-200K-DARE-merge-v5-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-34B-200K-DARE-merge-v5-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss |
| [Yi-34B-200K-DARE-merge-v5-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss |
| [Yi-34B-200K-DARE-merge-v5-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss |
| [Yi-34B-200K-DARE-merge-v5-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-34B-200K-DARE-merge-v5-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss |
| [Yi-34B-200K-DARE-merge-v5-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended |
| [Yi-34B-200K-DARE-merge-v5-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-34B-200K-DARE-merge-v5-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended |
| [Yi-34B-200K-DARE-merge-v5-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended |
| [Yi-34B-200K-DARE-merge-v5-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss |
| [Yi-34B-200K-DARE-merge-v5-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF/blob/main/Yi-34B-200K-DARE-merge-v5-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF --include "Yi-34B-200K-DARE-merge-v5-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yi-34B-200K-DARE-merge-v5-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Pandora-10.7B-v1-GGUF | tensorblock | 2025-04-21T00:28:52Z | 14 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:jan-ai/Pandora-10.7B-v1",
"base_model:quantized:jan-ai/Pandora-10.7B-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-16T20:32:04Z | ---
license: apache-2.0
language:
- en
tags:
- TensorBlock
- GGUF
base_model: jan-ai/Pandora-10.7B-v1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jan-ai/Pandora-10.7B-v1 - GGUF
This repo contains GGUF format model files for [jan-ai/Pandora-10.7B-v1](https://huggingface.co/jan-ai/Pandora-10.7B-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Pandora-10.7B-v1-Q2_K.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q2_K.gguf) | Q2_K | 4.003 GB | smallest, significant quality loss - not recommended for most purposes |
| [Pandora-10.7B-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q3_K_S.gguf) | Q3_K_S | 4.665 GB | very small, high quality loss |
| [Pandora-10.7B-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q3_K_M.gguf) | Q3_K_M | 5.196 GB | very small, high quality loss |
| [Pandora-10.7B-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q3_K_L.gguf) | Q3_K_L | 5.651 GB | small, substantial quality loss |
| [Pandora-10.7B-v1-Q4_0.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q4_0.gguf) | Q4_0 | 6.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Pandora-10.7B-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q4_K_S.gguf) | Q4_K_S | 6.119 GB | small, greater quality loss |
| [Pandora-10.7B-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q4_K_M.gguf) | Q4_K_M | 6.462 GB | medium, balanced quality - recommended |
| [Pandora-10.7B-v1-Q5_0.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q5_0.gguf) | Q5_0 | 7.397 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Pandora-10.7B-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q5_K_S.gguf) | Q5_K_S | 7.397 GB | large, low quality loss - recommended |
| [Pandora-10.7B-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q5_K_M.gguf) | Q5_K_M | 7.598 GB | large, very low quality loss - recommended |
| [Pandora-10.7B-v1-Q6_K.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q6_K.gguf) | Q6_K | 8.805 GB | very large, extremely low quality loss |
| [Pandora-10.7B-v1-Q8_0.gguf](https://huggingface.co/tensorblock/Pandora-10.7B-v1-GGUF/blob/main/Pandora-10.7B-v1-Q8_0.gguf) | Q8_0 | 11.404 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Pandora-10.7B-v1-GGUF --include "Pandora-10.7B-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Pandora-10.7B-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF | tensorblock | 2025-04-21T00:28:51Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"base_model:GAI-LLM/KoSOLAR-10.7B-mixed-v13",
"base_model:quantized:GAI-LLM/KoSOLAR-10.7B-mixed-v13",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-16T20:25:14Z | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: GAI-LLM/KoSOLAR-10.7B-mixed-v13
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## GAI-LLM/KoSOLAR-10.7B-mixed-v13 - GGUF
This repo contains GGUF format model files for [GAI-LLM/KoSOLAR-10.7B-mixed-v13](https://huggingface.co/GAI-LLM/KoSOLAR-10.7B-mixed-v13).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [KoSOLAR-10.7B-mixed-v13-Q2_K.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q2_K.gguf) | Q2_K | 4.079 GB | smallest, significant quality loss - not recommended for most purposes |
| [KoSOLAR-10.7B-mixed-v13-Q3_K_S.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q3_K_S.gguf) | Q3_K_S | 4.747 GB | very small, high quality loss |
| [KoSOLAR-10.7B-mixed-v13-Q3_K_M.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q3_K_M.gguf) | Q3_K_M | 5.278 GB | very small, high quality loss |
| [KoSOLAR-10.7B-mixed-v13-Q3_K_L.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q3_K_L.gguf) | Q3_K_L | 5.733 GB | small, substantial quality loss |
| [KoSOLAR-10.7B-mixed-v13-Q4_0.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q4_0.gguf) | Q4_0 | 6.163 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [KoSOLAR-10.7B-mixed-v13-Q4_K_S.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q4_K_S.gguf) | Q4_K_S | 6.210 GB | small, greater quality loss |
| [KoSOLAR-10.7B-mixed-v13-Q4_K_M.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q4_K_M.gguf) | Q4_K_M | 6.553 GB | medium, balanced quality - recommended |
| [KoSOLAR-10.7B-mixed-v13-Q5_0.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q5_0.gguf) | Q5_0 | 7.497 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [KoSOLAR-10.7B-mixed-v13-Q5_K_S.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q5_K_S.gguf) | Q5_K_S | 7.497 GB | large, low quality loss - recommended |
| [KoSOLAR-10.7B-mixed-v13-Q5_K_M.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q5_K_M.gguf) | Q5_K_M | 7.697 GB | large, very low quality loss - recommended |
| [KoSOLAR-10.7B-mixed-v13-Q6_K.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q6_K.gguf) | Q6_K | 8.913 GB | very large, extremely low quality loss |
| [KoSOLAR-10.7B-mixed-v13-Q8_0.gguf](https://huggingface.co/tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF/blob/main/KoSOLAR-10.7B-mixed-v13-Q8_0.gguf) | Q8_0 | 11.544 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF --include "KoSOLAR-10.7B-mixed-v13-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/KoSOLAR-10.7B-mixed-v13-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|