| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
minhxle/truesight-ft-job-0baa27e6-7a7f-426c-833d-b89df0f0c2d6 | minhxle | 2025-06-19T02:30:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:30:08Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
snowman477342/pass-finetune-qwen3-rec | snowman477342 | 2025-06-19T02:28:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:adapter:Qwen/Qwen3-8B-Base",
"region:us"
] | null | 2025-06-19T02:28:26Z | ---
base_model: Qwen/Qwen3-8B-Base
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
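The original card does not include starter code. As a minimal, hedged sketch (assuming this repository holds a standard PEFT adapter for the base model listed in the metadata, `Qwen/Qwen3-8B-Base`), the adapter could be loaded like this:
```python
# Minimal sketch, not from the original card: load the base model named in the
# metadata and apply the PEFT adapter from this repository on top of it.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-8B-Base"                        # base model from the card metadata
adapter_id = "snowman477342/pass-finetune-qwen3-rec"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello!", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```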
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
barandinho/TDM-8b-v0.1-Q8_0-GGUF | barandinho | 2025-06-19T02:24:06Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"grpo",
"llama-cpp",
"gguf-my-repo",
"base_model:barandinho/TDM-8b-v0.1",
"base_model:quantized:barandinho/TDM-8b-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T02:51:13Z | ---
library_name: transformers
tags:
- trl
- grpo
- llama-cpp
- gguf-my-repo
base_model: barandinho/TDM-8b-v0.1
---
# barandinho/TDM-8b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`barandinho/TDM-8b-v0.1`](https://huggingface.co/barandinho/TDM-8b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/barandinho/TDM-8b-v0.1) for more details on the model.
**IMPORTANT NOTE:**
Use this model with **temperature=0.6** and **top_p=0.95** for better performance.\
You can also try **top_k=20** and **min_p=0.01**.
The recommended way to use the model is through local LLM clients such as LM Studio or Ollama.
Apply these settings before using the model:
Set this as the system prompt:
```markdown
Sen TÜDÜM (TÜrkçe Düşünen Üretken Model) isimli yardımsever bir yapay zeka modelisin.
Türkçe cevap ver ve cevabını tamamla.
```
To enable multi-turn conversation, paste this Jinja template as the chat template:
```jinja2
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='', is_first_sp=true, is_last_user=false) %}{%- for message in messages %}{%- if message['role'] == 'system' %}{%- if ns.is_first_sp %}{% set ns.system_prompt = ns.system_prompt + message['content'] %}{% set ns.is_first_sp = false %}{%- else %}{% set ns.system_prompt = ns.system_prompt + '
' + message['content'] %}{%- endif %}{%- endif %}{%- endfor %}{{ bos_token }}{{ ns.system_prompt }}{%- for message in messages %}{% set content = message['content'] %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{%- set ns.is_first = false -%}{%- set ns.is_last_user = true -%}{{'<|User|>' + content + '<|Assistant|>'}}{%- endif %}{%- if message['role'] == 'assistant' %}{%- set content = (message.content.split('</think>')|last).lstrip() %}{%- endif %}{%- if message['role'] == 'assistant' and message['tool_calls'] is defined and message['tool_calls'] is not none %}{%- set ns.is_last_user = false -%}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{%- endif %}{%- set ns.is_first = false %}{%- set ns.is_tool = false -%}{%- set ns.is_output_first = true %}{%- for tool in message['tool_calls'] %}{%- if not ns.is_first %}{%- if content is none %}{{'<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- else %}{{content + '<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- set ns.is_first = true -%}{%- else %}{{'
' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- endfor %}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- if message['role'] == 'assistant' and (message['tool_calls'] is not defined or message['tool_calls'] is none)%}{%- set ns.is_last_user = false -%}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + content + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{{content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_last_user = false -%}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + content + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'
<|tool▁output▁begin|>' + content + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_last_user and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
```
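Since local clients such as Ollama are recommended above, a possible sketch for registering a downloaded GGUF file with Ollama is shown below; the local file path and the `tdm` model name are assumptions, and the Modelfile only carries over the system prompt and sampling settings from this card:
```bash
# Hypothetical sketch: create an Ollama model from a locally downloaded GGUF file.
cat > Modelfile <<'EOF'
FROM ./tdm-8b-v0.1-q8_0.gguf
PARAMETER temperature 0.6
PARAMETER top_p 0.95
SYSTEM """Sen TÜDÜM (TÜrkçe Düşünen Üretken Model) isimli yardımsever bir yapay zeka modelisin.
Türkçe cevap ver ve cevabını tamamla."""
EOF
ollama create tdm -f Modelfile
ollama run tdm
```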
## Use with llama.cpp
Below is a generic, auto-generated guide for using the model with llama.cpp.
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
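For example, the sampling settings recommended in the note above could be passed to `llama-cli` as flags (a hedged sketch; flag names follow current llama.cpp conventions and should be checked against your build):
```bash
llama-cli --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf \
  --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.01 \
  -p "The meaning to life and the universe is"
```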
### Server:
```bash
llama-server --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -c 2048
```
|
allenai/Molmo-7B-O-0924 | allenai | 2025-06-19T02:18:50Z | 6,272 | 159 | transformers | [
"transformers",
"safetensors",
"molmo",
"text-generation",
"multimodal",
"olmo",
"pixmo",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"arxiv:2409.17146",
"base_model:openai/clip-vit-large-patch14-336",
"base_model:finetune:openai/clip-vit-large-patch14-336",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2024-09-25T05:53:18Z | ---
license: apache-2.0
language:
- en
base_model:
- openai/clip-vit-large-patch14-336
- allenai/OLMo-7B-1124
pipeline_tag: image-text-to-text
tags:
- multimodal
- olmo
- molmo
- pixmo
library_name: transformers
---
<img src="molmo_logo.png" alt="Logo for the Molmo Project" style="width: auto; height: 50px;">
# Molmo 7B-O
Molmo is a family of open vision-language models developed by the Allen Institute for AI.
Molmo models are trained on PixMo, a dataset of 1 million highly curated image-text pairs.
It achieves state-of-the-art performance among multimodal models of a similar size while being fully open-source.
You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146).
Molmo 7B-O is based on [OLMo-7B-1024](https://huggingface.co/allenai/OLMo-7B-1024-preview) (a **preview** of the next generation of OLMo models)
and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as its vision backbone.
It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation.
This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
[**Sign up here**](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform) to be the first to know when artifacts are released.
Quick links:
- 💬 [Demo](https://molmo.allenai.org/)
- 📂 [All Models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
- 📃 [Paper](https://molmo.allenai.org/paper.pdf)
- 🎥 [Blog with Videos](https://molmo.allenai.org/blog)
## Quick Start
To run Molmo, first install dependencies:
```bash
pip install einops torchvision
```
Then, follow these steps:
```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests
# load the processor
processor = AutoProcessor.from_pretrained(
'allenai/Molmo-7B-O-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
# load the model
model = AutoModelForCausalLM.from_pretrained(
'allenai/Molmo-7B-O-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
# process the image and text
inputs = processor.process(
images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
text="Describe this image."
)
# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}
# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
# only get generated tokens; decode them to text
generated_tokens = output[0,inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)
# print the generated text
print(generated_text)
# >>> This photograph captures an adorable black Labrador puppy sitting on a weathered
# wooden deck. The deck's planks, which are a mix of light and dark brown with ...
```
To make inference more efficient, run with autocast:
```python
import torch

with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
    output = model.generate_from_batch(
        inputs,
        GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
        tokenizer=processor.tokenizer
    )
```
We did most of our evaluations in this setting (autocast on, but float32 weights).
To further reduce memory requirements, the model can be run with bfloat16 weights:
```python
model.to(dtype=torch.bfloat16)
inputs["images"] = inputs["images"].to(torch.bfloat16)
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
```
Note that this can sometimes change the output of the model compared to running with float32 weights.
## Evaluations
| Model | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|-----------------------------|-----------------------------------------|-----------------------------|
| Molmo 72B | 81.2 | 1077 |
| Molmo 7B-D | 77.3 | 1056 |
| **Molmo 7B-O (this model)** | **74.6** | **1051** |
| MolmoE 1B | 68.6 | 1032 |
| GPT-4o | 78.5 | 1079 |
| GPT-4V | 71.1 | 1041 |
| Gemini 1.5 Pro | 78.3 | 1074 |
| Gemini 1.5 Flash | 75.1 | 1054 |
| Claude 3.5 Sonnet | 76.7 | 1069 |
| Claude 3 Opus | 66.4 | 971 |
| Claude 3 Haiku | 65.3 | 999 |
| Qwen VL2 72B | 79.4 | 1037 |
| Qwen VL2 7B | 73.7 | 1025 |
| Intern VL2 LLAMA 76B | 77.1 | 1018 |
| Intern VL2 8B | 69.4 | 953 |
| Pixtral 12B | 69.5 | 1016 |
| Phi3.5-Vision 4B | 59.7 | 982 |
| PaliGemma 3B | 50.0 | 937 |
| LLAVA OneVision 72B | 76.6 | 1051 |
| LLAVA OneVision 7B | 72.0 | 1024 |
| Cambrian-1 34B | 66.8 | 953 |
| Cambrian-1 8B | 63.4 | 952 |
| xGen - MM - Interleave 4B | 59.5 | 979 |
| LLAVA-1.5 13B | 43.9 | 960 |
| LLAVA-1.5 7B | 40.7 | 951 |
*Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*
## FAQs
### I'm getting a broadcast error when processing images!
Your image might not be in RGB format. You can convert it using the following code snippet:
```python
from PIL import Image
image = Image.open(...)
if image.mode != "RGB":
    image = image.convert("RGB")
```
### Molmo doesn't work great with transparent images!
We received reports that Molmo models might struggle with transparent images.
For the time being, we recommend adding a white or dark background to your images before passing them to the model. The code snippet below shows how to do this using the Python Imaging Library (PIL):
```python
import requests
from PIL import Image, ImageStat
from transformers import AutoProcessor

# Load the image
url = "..."
image = Image.open(requests.get(url, stream=True).raw)
# Convert the image to grayscale to calculate brightness
gray_image = image.convert('L') # Convert to grayscale
# Calculate the average brightness
stat = ImageStat.Stat(gray_image)
average_brightness = stat.mean[0] # Get the average value
# Define background color based on brightness (threshold can be adjusted)
bg_color = (0, 0, 0) if average_brightness > 127 else (255, 255, 255)
# Create a new image with the same size as the original, filled with the background color
new_image = Image.new('RGB', image.size, bg_color)
# Paste the original image on top of the background (use image as a mask if needed)
new_image.paste(image, (0, 0), image if image.mode == 'RGBA' else None)
# Now you can pass the new_image to Molmo
processor = AutoProcessor.from_pretrained(
'allenai/Molmo-7B-O-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
```
## License and Use
This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
|
quidangz/LLama-8B-Instruct-MultiTask-CE | quidangz | 2025-06-19T02:18:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T02:07:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
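The original card does not include starter code. A minimal, hedged sketch using the standard `transformers` text-generation pipeline (the repository id is taken from this card; everything else is an assumption) might look like:
```python
# Minimal sketch, not from the original card: run this repository with the
# standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="quidangz/LLama-8B-Instruct-MultiTask-CE",
    device_map="auto",
    torch_dtype="auto",
)
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```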
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-GGUF | Alvin-LiuJia | 2025-06-19T02:13:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork",
"base_model:quantized:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T02:02:13Z | ---
base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Alvin-LiuJia
- **License:** apache-2.0
- **Finetuned from model:** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tensorblock/mosaicml_mpt-7b-chat-GGUF | tensorblock | 2025-06-19T02:06:17Z | 121 | 0 | null | [
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"TensorBlock",
"GGUF",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:Hello-SimpleAI/HC3",
"dataset:tatsu-lab/alpaca",
"dataset:Anthropic/hh-rlhf",
"dataset:victor123/evol_instruct_70k",
"base_model:mosaicml/mpt-7b-chat",
"base_model:quantized:mosaicml/mpt-7b-chat",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-10T09:22:45Z | ---
license: cc-by-nc-sa-4.0
datasets:
- jeffwan/sharegpt_vicuna
- Hello-SimpleAI/HC3
- tatsu-lab/alpaca
- Anthropic/hh-rlhf
- victor123/evol_instruct_70k
tags:
- Composer
- MosaicML
- llm-foundry
- TensorBlock
- GGUF
inference: false
base_model: mosaicml/mpt-7b-chat
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mosaicml/mpt-7b-chat - GGUF
This repo contains GGUF format model files for [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mpt-7b-chat-Q2_K.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q2_K.gguf) | Q2_K | 2.559 GB | smallest, significant quality loss - not recommended for most purposes |
| [mpt-7b-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q3_K_S.gguf) | Q3_K_S | 2.941 GB | very small, high quality loss |
| [mpt-7b-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q3_K_M.gguf) | Q3_K_M | 3.528 GB | very small, high quality loss |
| [mpt-7b-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q3_K_L.gguf) | Q3_K_L | 3.847 GB | small, substantial quality loss |
| [mpt-7b-chat-Q4_0.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q4_0.gguf) | Q4_0 | 3.796 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mpt-7b-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q4_K_S.gguf) | Q4_K_S | 3.830 GB | small, greater quality loss |
| [mpt-7b-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q4_K_M.gguf) | Q4_K_M | 4.274 GB | medium, balanced quality - recommended |
| [mpt-7b-chat-Q5_0.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q5_0.gguf) | Q5_0 | 4.601 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mpt-7b-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q5_K_S.gguf) | Q5_K_S | 4.601 GB | large, low quality loss - recommended |
| [mpt-7b-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q5_K_M.gguf) | Q5_K_M | 4.958 GB | large, very low quality loss - recommended |
| [mpt-7b-chat-Q6_K.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q6_K.gguf) | Q6_K | 5.457 GB | very large, extremely low quality loss |
| [mpt-7b-chat-Q8_0.gguf](https://huggingface.co/tensorblock/mosaicml_mpt-7b-chat-GGUF/blob/main/mpt-7b-chat-Q8_0.gguf) | Q8_0 | 7.067 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mosaicml_mpt-7b-chat-GGUF --include "mpt-7b-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mosaicml_mpt-7b-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
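As an alternative to the CLI, the same files can be fetched from Python with `huggingface_hub`; this is a hedged sketch using the repo and file names from the table above:
```python
# Hedged sketch: download one quantized file with the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/mosaicml_mpt-7b-chat-GGUF",
    filename="mpt-7b-chat-Q4_K_M.gguf",  # pick any filename from the table above
    local_dir="MY_LOCAL_DIR",
)
print(path)
```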
|
tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF | tensorblock | 2025-06-19T02:06:15Z | 121 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:Open-Orca/OpenOrca",
"base_model:CHIH-HUNG/llama-2-13b-OpenOrca_5w",
"base_model:quantized:CHIH-HUNG/llama-2-13b-OpenOrca_5w",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2025-05-10T06:28:58Z | ---
license: llama2
datasets:
- Open-Orca/OpenOrca
tags:
- TensorBlock
- GGUF
base_model: CHIH-HUNG/llama-2-13b-OpenOrca_5w
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## CHIH-HUNG/llama-2-13b-OpenOrca_5w - GGUF
This repo contains GGUF format model files for [CHIH-HUNG/llama-2-13b-OpenOrca_5w](https://huggingface.co/CHIH-HUNG/llama-2-13b-OpenOrca_5w).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-13b-OpenOrca_5w-Q2_K.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-13b-OpenOrca_5w-Q3_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [llama-2-13b-OpenOrca_5w-Q3_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [llama-2-13b-OpenOrca_5w-Q3_K_L.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [llama-2-13b-OpenOrca_5w-Q4_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-13b-OpenOrca_5w-Q4_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [llama-2-13b-OpenOrca_5w-Q4_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [llama-2-13b-OpenOrca_5w-Q5_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-13b-OpenOrca_5w-Q5_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [llama-2-13b-OpenOrca_5w-Q5_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [llama-2-13b-OpenOrca_5w-Q6_K.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [llama-2-13b-OpenOrca_5w-Q8_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF/blob/main/llama-2-13b-OpenOrca_5w-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF --include "llama-2-13b-OpenOrca_5w-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CHIH-HUNG_llama-2-13b-OpenOrca_5w-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF | tensorblock | 2025-06-19T02:06:12Z | 48 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"TensorBlock",
"GGUF",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"base_model:Aspik101/StableBeluga-13B-instruct-PL-lora_unload",
"base_model:quantized:Aspik101/StableBeluga-13B-instruct-PL-lora_unload",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T03:12:41Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- TensorBlock
- GGUF
base_model: Aspik101/StableBeluga-13B-instruct-PL-lora_unload
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Aspik101/StableBeluga-13B-instruct-PL-lora_unload - GGUF
This repo contains GGUF format model files for [Aspik101/StableBeluga-13B-instruct-PL-lora_unload](https://huggingface.co/Aspik101/StableBeluga-13B-instruct-PL-lora_unload).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [StableBeluga-13B-instruct-PL-lora_unload-Q2_K.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [StableBeluga-13B-instruct-PL-lora_unload-Q3_K_S.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [StableBeluga-13B-instruct-PL-lora_unload-Q3_K_M.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [StableBeluga-13B-instruct-PL-lora_unload-Q3_K_L.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [StableBeluga-13B-instruct-PL-lora_unload-Q4_0.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [StableBeluga-13B-instruct-PL-lora_unload-Q4_K_S.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [StableBeluga-13B-instruct-PL-lora_unload-Q4_K_M.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [StableBeluga-13B-instruct-PL-lora_unload-Q5_0.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [StableBeluga-13B-instruct-PL-lora_unload-Q5_K_S.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [StableBeluga-13B-instruct-PL-lora_unload-Q5_K_M.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [StableBeluga-13B-instruct-PL-lora_unload-Q6_K.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [StableBeluga-13B-instruct-PL-lora_unload-Q8_0.gguf](https://huggingface.co/tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF/blob/main/StableBeluga-13B-instruct-PL-lora_unload-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF --include "StableBeluga-13B-instruct-PL-lora_unload-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Aspik101_StableBeluga-13B-instruct-PL-lora_unload-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF | tensorblock | 2025-06-19T02:06:01Z | 83 | 0 | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"TensorBlock",
"GGUF",
"en",
"dataset:Anthropic/hh-rlhf",
"base_model:lomahony/eleuther-pythia410m-hh-dpo",
"base_model:quantized:lomahony/eleuther-pythia410m-hh-dpo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-09T18:19:59Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- TensorBlock
- GGUF
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
base_model: lomahony/eleuther-pythia410m-hh-dpo
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## lomahony/eleuther-pythia410m-hh-dpo - GGUF
This repo contains GGUF format model files for [lomahony/eleuther-pythia410m-hh-dpo](https://huggingface.co/lomahony/eleuther-pythia410m-hh-dpo).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [eleuther-pythia410m-hh-dpo-Q2_K.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q2_K.gguf) | Q2_K | 0.174 GB | smallest, significant quality loss - not recommended for most purposes |
| [eleuther-pythia410m-hh-dpo-Q3_K_S.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q3_K_S.gguf) | Q3_K_S | 0.197 GB | very small, high quality loss |
| [eleuther-pythia410m-hh-dpo-Q3_K_M.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q3_K_M.gguf) | Q3_K_M | 0.224 GB | very small, high quality loss |
| [eleuther-pythia410m-hh-dpo-Q3_K_L.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q3_K_L.gguf) | Q3_K_L | 0.240 GB | small, substantial quality loss |
| [eleuther-pythia410m-hh-dpo-Q4_0.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q4_0.gguf) | Q4_0 | 0.244 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [eleuther-pythia410m-hh-dpo-Q4_K_S.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q4_K_S.gguf) | Q4_K_S | 0.246 GB | small, greater quality loss |
| [eleuther-pythia410m-hh-dpo-Q4_K_M.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q4_K_M.gguf) | Q4_K_M | 0.267 GB | medium, balanced quality - recommended |
| [eleuther-pythia410m-hh-dpo-Q5_0.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q5_0.gguf) | Q5_0 | 0.288 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [eleuther-pythia410m-hh-dpo-Q5_K_S.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q5_K_S.gguf) | Q5_K_S | 0.288 GB | large, low quality loss - recommended |
| [eleuther-pythia410m-hh-dpo-Q5_K_M.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q5_K_M.gguf) | Q5_K_M | 0.305 GB | large, very low quality loss - recommended |
| [eleuther-pythia410m-hh-dpo-Q6_K.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q6_K.gguf) | Q6_K | 0.335 GB | very large, extremely low quality loss |
| [eleuther-pythia410m-hh-dpo-Q8_0.gguf](https://huggingface.co/tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF/blob/main/eleuther-pythia410m-hh-dpo-Q8_0.gguf) | Q8_0 | 0.433 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF --include "eleuther-pythia410m-hh-dpo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/lomahony_eleuther-pythia410m-hh-dpo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF | tensorblock | 2025-06-19T02:05:39Z | 128 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:Norquinal/claude_multiround_chat_1k",
"base_model:Norquinal/llama-2-7b-claude-chat",
"base_model:quantized:Norquinal/llama-2-7b-claude-chat",
"endpoints_compatible",
"region:us"
] | null | 2025-05-09T01:09:01Z | ---
datasets:
- Norquinal/claude_multiround_chat_1k
tags:
- TensorBlock
- GGUF
base_model: Norquinal/llama-2-7b-claude-chat
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Norquinal/llama-2-7b-claude-chat - GGUF
This repo contains GGUF format model files for [Norquinal/llama-2-7b-claude-chat](https://huggingface.co/Norquinal/llama-2-7b-claude-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-7b-claude-chat-Q2_K.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-7b-claude-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [llama-2-7b-claude-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [llama-2-7b-claude-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [llama-2-7b-claude-chat-Q4_0.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-7b-claude-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [llama-2-7b-claude-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [llama-2-7b-claude-chat-Q5_0.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-7b-claude-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [llama-2-7b-claude-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [llama-2-7b-claude-chat-Q6_K.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [llama-2-7b-claude-chat-Q8_0.gguf](https://huggingface.co/tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF/blob/main/llama-2-7b-claude-chat-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF --include "llama-2-7b-claude-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
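Alternatively, the same file can be fetched from Python with the `huggingface_hub` library. This is a minimal sketch: the repo ID and filename come from the specification table above, and `MY_LOCAL_DIR` is the same placeholder used in the CLI examples.
```py
from huggingface_hub import hf_hub_download

# Minimal sketch: download one GGUF file from this repo into a local directory.
# The filename can be any entry from the "Model file specification" table above.
local_path = hf_hub_download(
    repo_id="tensorblock/Norquinal_llama-2-7b-claude-chat-GGUF",
    filename="llama-2-7b-claude-chat-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",  # placeholder, same as in the CLI examples
)
print(local_path)  # absolute path to the downloaded file
```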
|
tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF | tensorblock | 2025-06-19T02:05:32Z | 38 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"base_model:baichuan-inc/Baichuan-13B-Base",
"base_model:quantized:baichuan-inc/Baichuan-13B-Base",
"region:us"
] | text-generation | 2025-05-08T19:48:48Z | ---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
tags:
- TensorBlock
- GGUF
base_model: baichuan-inc/Baichuan-13B-Base
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## baichuan-inc/Baichuan-13B-Base - GGUF
This repo contains GGUF format model files for [baichuan-inc/Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Baichuan-13B-Base-Q2_K.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q2_K.gguf) | Q2_K | 5.387 GB | smallest, significant quality loss - not recommended for most purposes |
| [Baichuan-13B-Base-Q3_K_S.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q3_K_S.gguf) | Q3_K_S | 6.203 GB | very small, high quality loss |
| [Baichuan-13B-Base-Q3_K_M.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q3_K_M.gguf) | Q3_K_M | 6.848 GB | very small, high quality loss |
| [Baichuan-13B-Base-Q3_K_L.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q3_K_L.gguf) | Q3_K_L | 7.270 GB | small, substantial quality loss |
| [Baichuan-13B-Base-Q4_0.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q4_0.gguf) | Q4_0 | 7.549 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Baichuan-13B-Base-Q4_K_S.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q4_K_S.gguf) | Q4_K_S | 7.934 GB | small, greater quality loss |
| [Baichuan-13B-Base-Q4_K_M.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q4_K_M.gguf) | Q4_K_M | 8.560 GB | medium, balanced quality - recommended |
| [Baichuan-13B-Base-Q5_0.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q5_0.gguf) | Q5_0 | 9.166 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Baichuan-13B-Base-Q5_K_S.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q5_K_S.gguf) | Q5_K_S | 9.341 GB | large, low quality loss - recommended |
| [Baichuan-13B-Base-Q5_K_M.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q5_K_M.gguf) | Q5_K_M | 9.849 GB | large, very low quality loss - recommended |
| [Baichuan-13B-Base-Q6_K.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q6_K.gguf) | Q6_K | 11.563 GB | very large, extremely low quality loss |
| [Baichuan-13B-Base-Q8_0.gguf](https://huggingface.co/tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF/blob/main/Baichuan-13B-Base-Q8_0.gguf) | Q8_0 | 14.097 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF --include "Baichuan-13B-Base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/baichuan-inc_Baichuan-13B-Base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF | tensorblock | 2025-06-19T02:05:26Z | 100 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"base_model:heegyu/WizardVicuna-Uncensored-3B-0719",
"base_model:quantized:heegyu/WizardVicuna-Uncensored-3B-0719",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-08T17:34:41Z | ---
license: apache-2.0
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
language:
- en
tags:
- TensorBlock
- GGUF
base_model: heegyu/WizardVicuna-Uncensored-3B-0719
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## heegyu/WizardVicuna-Uncensored-3B-0719 - GGUF
This repo contains GGUF format model files for [heegyu/WizardVicuna-Uncensored-3B-0719](https://huggingface.co/heegyu/WizardVicuna-Uncensored-3B-0719).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [WizardVicuna-Uncensored-3B-0719-Q2_K.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q2_K.gguf) | Q2_K | 1.980 GB | smallest, significant quality loss - not recommended for most purposes |
| [WizardVicuna-Uncensored-3B-0719-Q3_K_S.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q3_K_S.gguf) | Q3_K_S | 1.980 GB | very small, high quality loss |
| [WizardVicuna-Uncensored-3B-0719-Q3_K_M.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q3_K_M.gguf) | Q3_K_M | 2.139 GB | very small, high quality loss |
| [WizardVicuna-Uncensored-3B-0719-Q3_K_L.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q3_K_L.gguf) | Q3_K_L | 2.215 GB | small, substantial quality loss |
| [WizardVicuna-Uncensored-3B-0719-Q4_0.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q4_0.gguf) | Q4_0 | 1.980 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [WizardVicuna-Uncensored-3B-0719-Q4_K_S.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q4_K_S.gguf) | Q4_K_S | 2.403 GB | small, greater quality loss |
| [WizardVicuna-Uncensored-3B-0719-Q4_K_M.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q4_K_M.gguf) | Q4_K_M | 2.580 GB | medium, balanced quality - recommended |
| [WizardVicuna-Uncensored-3B-0719-Q5_0.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q5_0.gguf) | Q5_0 | 2.395 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [WizardVicuna-Uncensored-3B-0719-Q5_K_S.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q5_K_S.gguf) | Q5_K_S | 2.603 GB | large, low quality loss - recommended |
| [WizardVicuna-Uncensored-3B-0719-Q5_K_M.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q5_K_M.gguf) | Q5_K_M | 2.757 GB | large, very low quality loss - recommended |
| [WizardVicuna-Uncensored-3B-0719-Q6_K.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q6_K.gguf) | Q6_K | 3.642 GB | very large, extremely low quality loss |
| [WizardVicuna-Uncensored-3B-0719-Q8_0.gguf](https://huggingface.co/tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF/blob/main/WizardVicuna-Uncensored-3B-0719-Q8_0.gguf) | Q8_0 | 3.642 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF --include "WizardVicuna-Uncensored-3B-0719-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/heegyu_WizardVicuna-Uncensored-3B-0719-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF | tensorblock | 2025-06-19T02:05:17Z | 47 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:klosax/pythia-160m-deduped-step92k-193bt",
"base_model:quantized:klosax/pythia-160m-deduped-step92k-193bt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-08T16:10:45Z | ---
base_model: klosax/pythia-160m-deduped-step92k-193bt
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## klosax/pythia-160m-deduped-step92k-193bt - GGUF
This repo contains GGUF format model files for [klosax/pythia-160m-deduped-step92k-193bt](https://huggingface.co/klosax/pythia-160m-deduped-step92k-193bt).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [pythia-160m-deduped-step92k-193bt-Q2_K.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q2_K.gguf) | Q2_K | 0.078 GB | smallest, significant quality loss - not recommended for most purposes |
| [pythia-160m-deduped-step92k-193bt-Q3_K_S.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q3_K_S.gguf) | Q3_K_S | 0.087 GB | very small, high quality loss |
| [pythia-160m-deduped-step92k-193bt-Q3_K_M.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q3_K_M.gguf) | Q3_K_M | 0.095 GB | very small, high quality loss |
| [pythia-160m-deduped-step92k-193bt-Q3_K_L.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q3_K_L.gguf) | Q3_K_L | 0.099 GB | small, substantial quality loss |
| [pythia-160m-deduped-step92k-193bt-Q4_0.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q4_0.gguf) | Q4_0 | 0.103 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pythia-160m-deduped-step92k-193bt-Q4_K_S.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q4_K_S.gguf) | Q4_K_S | 0.104 GB | small, greater quality loss |
| [pythia-160m-deduped-step92k-193bt-Q4_K_M.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q4_K_M.gguf) | Q4_K_M | 0.110 GB | medium, balanced quality - recommended |
| [pythia-160m-deduped-step92k-193bt-Q5_0.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q5_0.gguf) | Q5_0 | 0.119 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pythia-160m-deduped-step92k-193bt-Q5_K_S.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q5_K_S.gguf) | Q5_K_S | 0.119 GB | large, low quality loss - recommended |
| [pythia-160m-deduped-step92k-193bt-Q5_K_M.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q5_K_M.gguf) | Q5_K_M | 0.124 GB | large, very low quality loss - recommended |
| [pythia-160m-deduped-step92k-193bt-Q6_K.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q6_K.gguf) | Q6_K | 0.135 GB | very large, extremely low quality loss |
| [pythia-160m-deduped-step92k-193bt-Q8_0.gguf](https://huggingface.co/tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF/blob/main/pythia-160m-deduped-step92k-193bt-Q8_0.gguf) | Q8_0 | 0.175 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF --include "pythia-160m-deduped-step92k-193bt-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/klosax_pythia-160m-deduped-step92k-193bt-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF | tensorblock | 2025-06-19T02:05:13Z | 67 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"base_model:speechlessai/speechless-llama2-dolphin-orca-platypus-13b",
"base_model:quantized:speechlessai/speechless-llama2-dolphin-orca-platypus-13b",
"region:us"
] | text-generation | 2025-05-08T14:39:45Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: This is a form to enable access to Llama 2 on Hugging Face
after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this form.
Requests will be processed in 1-2 days.
extra_gated_prompt: '**Your Hugging Face account email address MUST match the email
you provide on the Meta website, or your request will not be approved.**'
extra_gated_button_content: Submit
extra_gated_fields:
? I agree to share my name, email address and username with Meta and confirm that
I have already been granted download access on the Meta website
: checkbox
language:
- en
datasets:
- ehartford/dolphin
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- TensorBlock
- GGUF
base_model: speechlessai/speechless-llama2-dolphin-orca-platypus-13b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## speechlessai/speechless-llama2-dolphin-orca-platypus-13b - GGUF
This repo contains GGUF format model files for [speechlessai/speechless-llama2-dolphin-orca-platypus-13b](https://huggingface.co/speechlessai/speechless-llama2-dolphin-orca-platypus-13b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [speechless-llama2-dolphin-orca-platypus-13b-Q2_K.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [speechless-llama2-dolphin-orca-platypus-13b-Q3_K_S.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [speechless-llama2-dolphin-orca-platypus-13b-Q3_K_M.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [speechless-llama2-dolphin-orca-platypus-13b-Q3_K_L.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [speechless-llama2-dolphin-orca-platypus-13b-Q4_0.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [speechless-llama2-dolphin-orca-platypus-13b-Q4_K_S.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [speechless-llama2-dolphin-orca-platypus-13b-Q4_K_M.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [speechless-llama2-dolphin-orca-platypus-13b-Q5_0.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [speechless-llama2-dolphin-orca-platypus-13b-Q5_K_S.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [speechless-llama2-dolphin-orca-platypus-13b-Q5_K_M.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [speechless-llama2-dolphin-orca-platypus-13b-Q6_K.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [speechless-llama2-dolphin-orca-platypus-13b-Q8_0.gguf](https://huggingface.co/tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF/blob/main/speechless-llama2-dolphin-orca-platypus-13b-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF --include "speechless-llama2-dolphin-orca-platypus-13b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/speechlessai_speechless-llama2-dolphin-orca-platypus-13b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/openchat_openchat_v3.2_super-GGUF | tensorblock | 2025-06-19T02:05:02Z | 46 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:openchat/openchat_v3.2_super",
"base_model:quantized:openchat/openchat_v3.2_super",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2025-05-08T09:44:21Z | ---
license: llama2
tags:
- TensorBlock
- GGUF
base_model: openchat/openchat_v3.2_super
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## openchat/openchat_v3.2_super - GGUF
This repo contains GGUF format model files for [openchat/openchat_v3.2_super](https://huggingface.co/openchat/openchat_v3.2_super).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [openchat_v3.2_super-Q2_K.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [openchat_v3.2_super-Q3_K_S.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [openchat_v3.2_super-Q3_K_M.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [openchat_v3.2_super-Q3_K_L.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [openchat_v3.2_super-Q4_0.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openchat_v3.2_super-Q4_K_S.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [openchat_v3.2_super-Q4_K_M.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [openchat_v3.2_super-Q5_0.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openchat_v3.2_super-Q5_K_S.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [openchat_v3.2_super-Q5_K_M.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [openchat_v3.2_super-Q6_K.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [openchat_v3.2_super-Q8_0.gguf](https://huggingface.co/tensorblock/openchat_openchat_v3.2_super-GGUF/blob/main/openchat_v3.2_super-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/openchat_openchat_v3.2_super-GGUF --include "openchat_v3.2_super-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/openchat_openchat_v3.2_super-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF | tensorblock | 2025-06-19T02:04:38Z | 185 | 0 | transformers | [
"transformers",
"gguf",
"llama-2",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:uukuguy/speechless-codellama-orca-airoboros-13b-0.10e",
"base_model:quantized:uukuguy/speechless-codellama-orca-airoboros-13b-0.10e",
"license:llama2",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-07T18:56:35Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- garage-bAInd/Open-Platypus
tags:
- llama-2
- TensorBlock
- GGUF
license: llama2
base_model: uukuguy/speechless-codellama-orca-airoboros-13b-0.10e
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## uukuguy/speechless-codellama-orca-airoboros-13b-0.10e - GGUF
This repo contains GGUF format model files for [uukuguy/speechless-codellama-orca-airoboros-13b-0.10e](https://huggingface.co/uukuguy/speechless-codellama-orca-airoboros-13b-0.10e).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q2_K.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q3_K_S.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q3_K_M.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q3_K_L.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q4_0.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q4_K_S.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q4_K_M.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q5_0.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q5_K_S.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q5_K_M.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q6_K.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [speechless-codellama-orca-airoboros-13b-0.10e-Q8_0.gguf](https://huggingface.co/tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF/blob/main/speechless-codellama-orca-airoboros-13b-0.10e-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF --include "speechless-codellama-orca-airoboros-13b-0.10e-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/uukuguy_speechless-codellama-orca-airoboros-13b-0.10e-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1-IQ4_XS-GGUF | TheHierophant | 2025-06-19T02:04:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1",
"base_model:quantized:TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-19T02:04:03Z | ---
base_model: TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1-IQ4_XS-GGUF
This model was converted to GGUF format from [`TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1`](https://huggingface.co/TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1-IQ4_XS-GGUF --hf-file umbral-devil-hermes-mind-cursedmatrix-8b-v0.1-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1-IQ4_XS-GGUF --hf-file umbral-devil-hermes-mind-cursedmatrix-8b-v0.1-iq4_xs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1-IQ4_XS-GGUF --hf-file umbral-devil-hermes-mind-cursedmatrix-8b-v0.1-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TheHierophant/Umbral-Devil-Hermes-Mind-CursedMatrix-8B-V0.1-IQ4_XS-GGUF --hf-file umbral-devil-hermes-mind-cursedmatrix-8b-v0.1-iq4_xs-imat.gguf -c 2048
```
|
tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF | tensorblock | 2025-06-19T02:04:25Z | 82 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:quantized:Open-Orca/Mistral-7B-OpenOrca",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-07T15:46:29Z | ---
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: Open-Orca/Mistral-7B-OpenOrca
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Open-Orca/Mistral-7B-OpenOrca - GGUF
This repo contains GGUF format model files for [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-OpenOrca-Q2_K.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-OpenOrca-Q3_K_S.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-OpenOrca-Q3_K_M.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-OpenOrca-Q3_K_L.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-OpenOrca-Q4_0.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-OpenOrca-Q4_K_S.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-OpenOrca-Q4_K_M.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-OpenOrca-Q5_0.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-OpenOrca-Q5_K_S.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-OpenOrca-Q5_K_M.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-OpenOrca-Q6_K.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-OpenOrca-Q8_0.gguf](https://huggingface.co/tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF/blob/main/Mistral-7B-OpenOrca-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF --include "Mistral-7B-OpenOrca-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Open-Orca_Mistral-7B-OpenOrca-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Abe13_full-juni-v0.1-GGUF | tensorblock | 2025-06-19T02:04:07Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:Abe13/full-juni-v0.1",
"base_model:quantized:Abe13/full-juni-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-07T10:43:37Z | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: Abe13/full-juni-v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Abe13/full-juni-v0.1 - GGUF
This repo contains GGUF format model files for [Abe13/full-juni-v0.1](https://huggingface.co/Abe13/full-juni-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
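When using this template, the `{prompt}` placeholder is replaced with the user message before the text is sent to the model. A minimal Python sketch (the example question is arbitrary):
```py
# Minimal sketch: fill the prompt template above before sending it to the model.
template = "<s>[INST] {prompt} [/INST]"
user_prompt = "Summarize what GGUF quantization is in one sentence."  # example input
formatted = template.format(prompt=user_prompt)
print(formatted)
```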
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [full-juni-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [full-juni-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [full-juni-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [full-juni-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [full-juni-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [full-juni-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [full-juni-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [full-juni-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [full-juni-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [full-juni-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [full-juni-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [full-juni-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Abe13_full-juni-v0.1-GGUF/blob/main/full-juni-v0.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Abe13_full-juni-v0.1-GGUF --include "full-juni-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Abe13_full-juni-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
BootesVoid/cmc2ot9lm00dqaqihbt3pnz4i_cmc2ozs3d00e7aqihptp1wayy | BootesVoid | 2025-06-19T02:03:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T02:03:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SOPHW69
---
# Cmc2Ot9Lm00Dqaqihbt3Pnz4I_Cmc2Ozs3D00E7Aqihptp1Wayy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SOPHW69` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SOPHW69",
"lora_weights": "https://huggingface.co/BootesVoid/cmc2ot9lm00dqaqihbt3pnz4i_cmc2ozs3d00e7aqihptp1wayy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2ot9lm00dqaqihbt3pnz4i_cmc2ozs3d00e7aqihptp1wayy', weight_name='lora.safetensors')
image = pipeline('SOPHW69').images[0]
```
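The `image` returned above is a standard PIL image, so it can be written straight to disk. A tiny follow-up sketch to the snippet above (the output filename is arbitrary):
```py
# Continues the diffusers snippet above: `image` is a PIL.Image instance.
image.save("sophw69_sample.png")  # arbitrary output filename
```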
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2ot9lm00dqaqihbt3pnz4i_cmc2ozs3d00e7aqihptp1wayy/discussions) to add images that show off what you’ve made with this LoRA.
|
tensorblock/Mathoctopus_Parallel_7B-GGUF | tensorblock | 2025-06-19T02:03:34Z | 28 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"es",
"zh",
"de",
"ru",
"th",
"sw",
"ja",
"fr",
"bn",
"dataset:Mathoctopus/GSM8KInstruct_Parallel",
"base_model:Mathoctopus/Parallel_7B",
"base_model:quantized:Mathoctopus/Parallel_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T15:41:26Z | ---
license: apache-2.0
datasets:
- Mathoctopus/GSM8KInstruct_Parallel
language:
- en
- es
- zh
- de
- ru
- th
- sw
- ja
- fr
- bn
tags:
- TensorBlock
- GGUF
base_model: Mathoctopus/Parallel_7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Mathoctopus/Parallel_7B - GGUF
This repo contains GGUF format model files for [Mathoctopus/Parallel_7B](https://huggingface.co/Mathoctopus/Parallel_7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Parallel_7B-Q2_K.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Parallel_7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Parallel_7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Parallel_7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Parallel_7B-Q4_0.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Parallel_7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Parallel_7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Parallel_7B-Q5_0.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Parallel_7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Parallel_7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Parallel_7B-Q6_K.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Parallel_7B-Q8_0.gguf](https://huggingface.co/tensorblock/Mathoctopus_Parallel_7B-GGUF/blob/main/Parallel_7B-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mathoctopus_Parallel_7B-GGUF --include "Parallel_7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mathoctopus_Parallel_7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
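Once a file is downloaded, any llama.cpp-compatible runtime can load it. As an illustration, here is a minimal sketch using the `llama-cpp-python` bindings (the local path and prompt are placeholders, not part of this repository):
```py
from llama_cpp import Llama

# Point model_path at whichever quantization you downloaded above
llm = Llama(model_path="MY_LOCAL_DIR/Parallel_7B-Q4_K_M.gguf", n_ctx=2048)
result = llm("Janet has 3 apples and buys 5 more. How many apples does she have?", max_tokens=64)
print(result["choices"][0]["text"])
```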
|
tensorblock/d-matrix_gpt2-GGUF | tensorblock | 2025-06-19T02:03:32Z | 42 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:d-matrix/gpt2",
"base_model:quantized:d-matrix/gpt2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T15:11:52Z | ---
license: mit
tags:
- TensorBlock
- GGUF
base_model: d-matrix/gpt2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## d-matrix/gpt2 - GGUF
This repo contains GGUF format model files for [d-matrix/gpt2](https://huggingface.co/d-matrix/gpt2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gpt2-Q2_K.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q2_K.gguf) | Q2_K | 0.069 GB | smallest, significant quality loss - not recommended for most purposes |
| [gpt2-Q3_K_S.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q3_K_S.gguf) | Q3_K_S | 0.074 GB | very small, high quality loss |
| [gpt2-Q3_K_M.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q3_K_M.gguf) | Q3_K_M | 0.081 GB | very small, high quality loss |
| [gpt2-Q3_K_L.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q3_K_L.gguf) | Q3_K_L | 0.086 GB | small, substantial quality loss |
| [gpt2-Q4_0.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q4_0.gguf) | Q4_0 | 0.085 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gpt2-Q4_K_S.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q4_K_S.gguf) | Q4_K_S | 0.085 GB | small, greater quality loss |
| [gpt2-Q4_K_M.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q4_K_M.gguf) | Q4_K_M | 0.091 GB | medium, balanced quality - recommended |
| [gpt2-Q5_0.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q5_0.gguf) | Q5_0 | 0.095 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gpt2-Q5_K_S.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q5_K_S.gguf) | Q5_K_S | 0.095 GB | large, low quality loss - recommended |
| [gpt2-Q5_K_M.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q5_K_M.gguf) | Q5_K_M | 0.100 GB | large, very low quality loss - recommended |
| [gpt2-Q6_K.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q6_K.gguf) | Q6_K | 0.107 GB | very large, extremely low quality loss |
| [gpt2-Q8_0.gguf](https://huggingface.co/tensorblock/d-matrix_gpt2-GGUF/blob/main/gpt2-Q8_0.gguf) | Q8_0 | 0.137 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/d-matrix_gpt2-GGUF --include "gpt2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/d-matrix_gpt2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF | tensorblock | 2025-06-19T02:03:22Z | 68 | 0 | transformers | [
"transformers",
"gguf",
"Mistral",
"Pygmalion",
"llama-2",
"llama-2-7b",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:Delcos/Mistral-Pygmalion-7b",
"base_model:quantized:Delcos/Mistral-Pygmalion-7b",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-06T11:23:40Z | ---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Mistral
- Pygmalion
- llama-2
- llama-2-7b
- TensorBlock
- GGUF
base_model: Delcos/Mistral-Pygmalion-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Delcos/Mistral-Pygmalion-7b - GGUF
This repo contains GGUF format model files for [Delcos/Mistral-Pygmalion-7b](https://huggingface.co/Delcos/Mistral-Pygmalion-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-Pygmalion-7b-Q2_K.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-Pygmalion-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Mistral-Pygmalion-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Mistral-Pygmalion-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Mistral-Pygmalion-7b-Q4_0.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-Pygmalion-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Mistral-Pygmalion-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Mistral-Pygmalion-7b-Q5_0.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-Pygmalion-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Mistral-Pygmalion-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Mistral-Pygmalion-7b-Q6_K.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Mistral-Pygmalion-7b-Q8_0.gguf](https://huggingface.co/tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF/blob/main/Mistral-Pygmalion-7b-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF --include "Mistral-Pygmalion-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Delcos_Mistral-Pygmalion-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/CobraMamba_mamba-gpt-7b-GGUF | tensorblock | 2025-06-19T02:03:18Z | 47 | 0 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"TensorBlock",
"GGUF",
"en",
"base_model:CobraMamba/mamba-gpt-7b",
"base_model:quantized:CobraMamba/mamba-gpt-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-05-06T10:20:49Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- TensorBlock
- GGUF
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
base_model: CobraMamba/mamba-gpt-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## CobraMamba/mamba-gpt-7b - GGUF
This repo contains GGUF format model files for [CobraMamba/mamba-gpt-7b](https://huggingface.co/CobraMamba/mamba-gpt-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mamba-gpt-7b-Q2_K.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [mamba-gpt-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [mamba-gpt-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [mamba-gpt-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [mamba-gpt-7b-Q4_0.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mamba-gpt-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [mamba-gpt-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [mamba-gpt-7b-Q5_0.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mamba-gpt-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [mamba-gpt-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [mamba-gpt-7b-Q6_K.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [mamba-gpt-7b-Q8_0.gguf](https://huggingface.co/tensorblock/CobraMamba_mamba-gpt-7b-GGUF/blob/main/mamba-gpt-7b-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CobraMamba_mamba-gpt-7b-GGUF --include "mamba-gpt-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CobraMamba_mamba-gpt-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF | tensorblock | 2025-06-19T02:03:15Z | 77 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNC-LLM/Mistral-7B-O3k-Au1k-ver0.7",
"base_model:quantized:MNC-LLM/Mistral-7B-O3k-Au1k-ver0.7",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T10:03:19Z | ---
base_model: MNC-LLM/Mistral-7B-O3k-Au1k-ver0.7
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MNC-LLM/Mistral-7B-O3k-Au1k-ver0.7 - GGUF
This repo contains GGUF format model files for [MNC-LLM/Mistral-7B-O3k-Au1k-ver0.7](https://huggingface.co/MNC-LLM/Mistral-7B-O3k-Au1k-ver0.7).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-O3k-Au1k-ver0.7-Q2_K.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-O3k-Au1k-ver0.7-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-O3k-Au1k-ver0.7-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-O3k-Au1k-ver0.7-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-O3k-Au1k-ver0.7-Q4_0.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-O3k-Au1k-ver0.7-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-O3k-Au1k-ver0.7-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-O3k-Au1k-ver0.7-Q5_0.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-O3k-Au1k-ver0.7-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-O3k-Au1k-ver0.7-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-O3k-Au1k-ver0.7-Q6_K.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-O3k-Au1k-ver0.7-Q8_0.gguf](https://huggingface.co/tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF/blob/main/Mistral-7B-O3k-Au1k-ver0.7-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF --include "Mistral-7B-O3k-Au1k-ver0.7-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNC-LLM_Mistral-7B-O3k-Au1k-ver0.7-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF | tensorblock | 2025-06-19T02:03:04Z | 126 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"base_model:jamesdborin/LeoLM-hesseianai-13b-chat",
"base_model:quantized:jamesdborin/LeoLM-hesseianai-13b-chat",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-06T05:54:49Z | ---
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
base_model: jamesdborin/LeoLM-hesseianai-13b-chat
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## jamesdborin/LeoLM-hesseianai-13b-chat - GGUF
This repo contains GGUF format model files for [jamesdborin/LeoLM-hesseianai-13b-chat](https://huggingface.co/jamesdborin/LeoLM-hesseianai-13b-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
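As a quick illustration, the ChatML-style template above can be filled in with ordinary string formatting before being passed to your runtime (a minimal sketch; the system prompt and user message are placeholders):
```py
# Fill the template shown above with placeholder values
system_prompt = "You are a helpful assistant that answers in German."
prompt = "Schreibe ein kurzes Gedicht über den Herbst."

formatted = (
    "<|im_start|>system\n"
    f"{system_prompt}<|im_end|>\n"
    "<|im_start|>user\n"
    f"{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(formatted)
```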
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [LeoLM-hesseianai-13b-chat-Q2_K.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q2_K.gguf) | Q2_K | 4.855 GB | smallest, significant quality loss - not recommended for most purposes |
| [LeoLM-hesseianai-13b-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q3_K_S.gguf) | Q3_K_S | 5.660 GB | very small, high quality loss |
| [LeoLM-hesseianai-13b-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q3_K_M.gguf) | Q3_K_M | 6.339 GB | very small, high quality loss |
| [LeoLM-hesseianai-13b-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [LeoLM-hesseianai-13b-chat-Q4_0.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q4_0.gguf) | Q4_0 | 7.367 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [LeoLM-hesseianai-13b-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q4_K_S.gguf) | Q4_K_S | 7.424 GB | small, greater quality loss |
| [LeoLM-hesseianai-13b-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q4_K_M.gguf) | Q4_K_M | 7.867 GB | medium, balanced quality - recommended |
| [LeoLM-hesseianai-13b-chat-Q5_0.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q5_0.gguf) | Q5_0 | 8.973 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [LeoLM-hesseianai-13b-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q5_K_S.gguf) | Q5_K_S | 8.973 GB | large, low quality loss - recommended |
| [LeoLM-hesseianai-13b-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q5_K_M.gguf) | Q5_K_M | 9.231 GB | large, very low quality loss - recommended |
| [LeoLM-hesseianai-13b-chat-Q6_K.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q6_K.gguf) | Q6_K | 10.680 GB | very large, extremely low quality loss |
| [LeoLM-hesseianai-13b-chat-Q8_0.gguf](https://huggingface.co/tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF/blob/main/LeoLM-hesseianai-13b-chat-Q8_0.gguf) | Q8_0 | 13.833 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF --include "LeoLM-hesseianai-13b-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/jamesdborin_LeoLM-hesseianai-13b-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF | tensorblock | 2025-06-19T02:02:57Z | 60 | 0 | null | [
"gguf",
"medical",
"TensorBlock",
"GGUF",
"text2text-generation",
"en",
"dataset:starmpcc/Asclepius-Synthetic-Clinical-Notes",
"base_model:starmpcc/Asclepius-Llama2-13B",
"base_model:quantized:starmpcc/Asclepius-Llama2-13B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-06T04:28:51Z | ---
license: cc-by-nc-4.0
datasets:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
language:
- en
pipeline_tag: text2text-generation
tags:
- medical
- TensorBlock
- GGUF
base_model: starmpcc/Asclepius-Llama2-13B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## starmpcc/Asclepius-Llama2-13B - GGUF
This repo contains GGUF format model files for [starmpcc/Asclepius-Llama2-13B](https://huggingface.co/starmpcc/Asclepius-Llama2-13B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Asclepius-Llama2-13B-Q2_K.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [Asclepius-Llama2-13B-Q3_K_S.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [Asclepius-Llama2-13B-Q3_K_M.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [Asclepius-Llama2-13B-Q3_K_L.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [Asclepius-Llama2-13B-Q4_0.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Asclepius-Llama2-13B-Q4_K_S.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [Asclepius-Llama2-13B-Q4_K_M.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [Asclepius-Llama2-13B-Q5_0.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Asclepius-Llama2-13B-Q5_K_S.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [Asclepius-Llama2-13B-Q5_K_M.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [Asclepius-Llama2-13B-Q6_K.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [Asclepius-Llama2-13B-Q8_0.gguf](https://huggingface.co/tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF/blob/main/Asclepius-Llama2-13B-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF --include "Asclepius-Llama2-13B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/starmpcc_Asclepius-Llama2-13B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF | tensorblock | 2025-06-19T02:02:25Z | 32 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"base_model:OpenBuddy/openbuddy-mistral-7b-v13-base",
"base_model:quantized:OpenBuddy/openbuddy-mistral-7b-v13-base",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-05T19:36:25Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: OpenBuddy/openbuddy-mistral-7b-v13-base
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## OpenBuddy/openbuddy-mistral-7b-v13-base - GGUF
This repo contains GGUF format model files for [OpenBuddy/openbuddy-mistral-7b-v13-base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [openbuddy-mistral-7b-v13-base-Q2_K.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q2_K.gguf) | Q2_K | 2.741 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-mistral-7b-v13-base-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q3_K_S.gguf) | Q3_K_S | 3.188 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13-base-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q3_K_M.gguf) | Q3_K_M | 3.543 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13-base-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q3_K_L.gguf) | Q3_K_L | 3.846 GB | small, substantial quality loss |
| [openbuddy-mistral-7b-v13-base-Q4_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q4_0.gguf) | Q4_0 | 4.135 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-mistral-7b-v13-base-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q4_K_S.gguf) | Q4_K_S | 4.167 GB | small, greater quality loss |
| [openbuddy-mistral-7b-v13-base-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q4_K_M.gguf) | Q4_K_M | 4.395 GB | medium, balanced quality - recommended |
| [openbuddy-mistral-7b-v13-base-Q5_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q5_0.gguf) | Q5_0 | 5.026 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-mistral-7b-v13-base-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q5_K_S.gguf) | Q5_K_S | 5.026 GB | large, low quality loss - recommended |
| [openbuddy-mistral-7b-v13-base-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q5_K_M.gguf) | Q5_K_M | 5.160 GB | large, very low quality loss - recommended |
| [openbuddy-mistral-7b-v13-base-Q6_K.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q6_K.gguf) | Q6_K | 5.973 GB | very large, extremely low quality loss |
| [openbuddy-mistral-7b-v13-base-Q8_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base-Q8_0.gguf) | Q8_0 | 7.736 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF --include "openbuddy-mistral-7b-v13-base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenBuddy_openbuddy-mistral-7b-v13-base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF | tensorblock | 2025-06-19T02:02:11Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"base_model:yentinglin/Taiwan-LLM-7B-v2.0-chat",
"base_model:quantized:yentinglin/Taiwan-LLM-7B-v2.0-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-05T17:54:52Z | ---
license: apache-2.0
language:
- zh
widget:
- text: 'A chat between a curious user and an artificial intelligence assistant. The
assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:'
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
Name: text
Mail: text
Organization: text
Country: text
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox
tags:
- TensorBlock
- GGUF
base_model: yentinglin/Taiwan-LLM-7B-v2.0-chat
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## yentinglin/Taiwan-LLM-7B-v2.0-chat - GGUF
This repo contains GGUF format model files for [yentinglin/Taiwan-LLM-7B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
{system_prompt}</s>USER: {prompt}</s>ASSISTANT:
```
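The template above can likewise be assembled with plain string formatting before inference (a minimal sketch; the values shown are placeholders taken from the card's widget example):
```py
# Fill the template shown above; values are placeholders
system_prompt = "A chat between a curious user and an artificial intelligence assistant."
prompt = "你好,請問你可以幫我寫一封推薦信嗎?"

formatted = f"{system_prompt}</s>USER: {prompt}</s>ASSISTANT:"
print(formatted)
```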
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Taiwan-LLM-7B-v2.0-chat-Q2_K.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Taiwan-LLM-7B-v2.0-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Taiwan-LLM-7B-v2.0-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Taiwan-LLM-7B-v2.0-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Taiwan-LLM-7B-v2.0-chat-Q4_0.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Taiwan-LLM-7B-v2.0-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Taiwan-LLM-7B-v2.0-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Taiwan-LLM-7B-v2.0-chat-Q5_0.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Taiwan-LLM-7B-v2.0-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Taiwan-LLM-7B-v2.0-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Taiwan-LLM-7B-v2.0-chat-Q6_K.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Taiwan-LLM-7B-v2.0-chat-Q8_0.gguf](https://huggingface.co/tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF/blob/main/Taiwan-LLM-7B-v2.0-chat-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF --include "Taiwan-LLM-7B-v2.0-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/yentinglin_Taiwan-LLM-7B-v2.0-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF | tensorblock | 2025-06-19T02:02:05Z | 22 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"ko",
"dataset:Open-Orca/OpenOrca",
"dataset:kyujinpy/KOR-OpenOrca-Platypus",
"base_model:Korabbit/llama-2-ko-7b-bilingual",
"base_model:quantized:Korabbit/llama-2-ko-7b-bilingual",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T16:22:00Z | ---
license: llama2
datasets:
- Open-Orca/OpenOrca
- kyujinpy/KOR-OpenOrca-Platypus
language:
- en
- ko
tags:
- TensorBlock
- GGUF
base_model: Korabbit/llama-2-ko-7b-bilingual
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Korabbit/llama-2-ko-7b-bilingual - GGUF
This repo contains GGUF format model files for [Korabbit/llama-2-ko-7b-bilingual](https://huggingface.co/Korabbit/llama-2-ko-7b-bilingual).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-ko-7b-bilingual-Q2_K.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q2_K.gguf) | Q2_K | 2.601 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-ko-7b-bilingual-Q3_K_S.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q3_K_S.gguf) | Q3_K_S | 3.022 GB | very small, high quality loss |
| [llama-2-ko-7b-bilingual-Q3_K_M.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q3_K_M.gguf) | Q3_K_M | 3.372 GB | very small, high quality loss |
| [llama-2-ko-7b-bilingual-Q3_K_L.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q3_K_L.gguf) | Q3_K_L | 3.671 GB | small, substantial quality loss |
| [llama-2-ko-7b-bilingual-Q4_0.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q4_0.gguf) | Q4_0 | 3.907 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-ko-7b-bilingual-Q4_K_S.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q4_K_S.gguf) | Q4_K_S | 3.938 GB | small, greater quality loss |
| [llama-2-ko-7b-bilingual-Q4_K_M.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q4_K_M.gguf) | Q4_K_M | 4.163 GB | medium, balanced quality - recommended |
| [llama-2-ko-7b-bilingual-Q5_0.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q5_0.gguf) | Q5_0 | 4.741 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-ko-7b-bilingual-Q5_K_S.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q5_K_S.gguf) | Q5_K_S | 4.741 GB | large, low quality loss - recommended |
| [llama-2-ko-7b-bilingual-Q5_K_M.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q5_K_M.gguf) | Q5_K_M | 4.872 GB | large, very low quality loss - recommended |
| [llama-2-ko-7b-bilingual-Q6_K.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q6_K.gguf) | Q6_K | 5.626 GB | very large, extremely low quality loss |
| [llama-2-ko-7b-bilingual-Q8_0.gguf](https://huggingface.co/tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF/blob/main/llama-2-ko-7b-bilingual-Q8_0.gguf) | Q8_0 | 7.286 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF --include "llama-2-ko-7b-bilingual-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Korabbit_llama-2-ko-7b-bilingual-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
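After downloading, you can run a quick local check with llama.cpp's `llama-cli` binary (a minimal sketch, assuming llama.cpp is built at a compatible commit; the quant choice and prompt below are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/llama-2-ko-7b-bilingual-Q4_K_M.gguf -p "안녕하세요, 간단히 자기소개를 해주세요." -n 128
```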
|
tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF | tensorblock | 2025-06-19T02:01:59Z | 38 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"base_model:OpenBuddy/openbuddy-openllama-7b-v12-bf16",
"base_model:quantized:OpenBuddy/openbuddy-openllama-7b-v12-bf16",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-05T14:22:04Z | ---
license: apache-2.0
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: OpenBuddy/openbuddy-openllama-7b-v12-bf16
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## OpenBuddy/openbuddy-openllama-7b-v12-bf16 - GGUF
This repo contains GGUF format model files for [OpenBuddy/openbuddy-openllama-7b-v12-bf16](https://huggingface.co/OpenBuddy/openbuddy-openllama-7b-v12-bf16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [openbuddy-openllama-7b-v12-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q2_K.gguf) | Q2_K | 2.557 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-openllama-7b-v12-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q3_K_S.gguf) | Q3_K_S | 2.975 GB | very small, high quality loss |
| [openbuddy-openllama-7b-v12-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q3_K_M.gguf) | Q3_K_M | 3.324 GB | very small, high quality loss |
| [openbuddy-openllama-7b-v12-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q3_K_L.gguf) | Q3_K_L | 3.623 GB | small, substantial quality loss |
| [openbuddy-openllama-7b-v12-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q4_0.gguf) | Q4_0 | 3.855 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-openllama-7b-v12-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q4_K_S.gguf) | Q4_K_S | 3.886 GB | small, greater quality loss |
| [openbuddy-openllama-7b-v12-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q4_K_M.gguf) | Q4_K_M | 4.110 GB | medium, balanced quality - recommended |
| [openbuddy-openllama-7b-v12-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q5_0.gguf) | Q5_0 | 4.683 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-openllama-7b-v12-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q5_K_S.gguf) | Q5_K_S | 4.683 GB | large, low quality loss - recommended |
| [openbuddy-openllama-7b-v12-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q5_K_M.gguf) | Q5_K_M | 4.815 GB | large, very low quality loss - recommended |
| [openbuddy-openllama-7b-v12-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q6_K.gguf) | Q6_K | 5.564 GB | very large, extremely low quality loss |
| [openbuddy-openllama-7b-v12-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16-Q8_0.gguf) | Q8_0 | 7.206 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF --include "openbuddy-openllama-7b-v12-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenBuddy_openbuddy-openllama-7b-v12-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
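Once the file is on disk, a quick local test with llama.cpp's `llama-cli` looks like this (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/openbuddy-openllama-7b-v12-bf16-Q4_K_M.gguf -p "Hello, please introduce yourself briefly." -n 128
```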
|
tensorblock/Voicelab_trurl-2-13b-academic-GGUF | tensorblock | 2025-06-19T02:01:57Z | 64 | 0 | null | [
"gguf",
"voicelab",
"pytorch",
"llama-2",
"trurl",
"trurl-2",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"pl",
"base_model:Voicelab/trurl-2-13b-academic",
"base_model:quantized:Voicelab/trurl-2-13b-academic",
"region:us"
] | text-generation | 2025-05-05T14:02:57Z | ---
language:
- en
- pl
pipeline_tag: text-generation
inference: false
tags:
- voicelab
- pytorch
- llama-2
- trurl
- trurl-2
- TensorBlock
- GGUF
base_model: Voicelab/trurl-2-13b-academic
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Voicelab/trurl-2-13b-academic - GGUF
This repo contains GGUF format model files for [Voicelab/trurl-2-13b-academic](https://huggingface.co/Voicelab/trurl-2-13b-academic).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [trurl-2-13b-academic-Q2_K.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [trurl-2-13b-academic-Q3_K_S.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [trurl-2-13b-academic-Q3_K_M.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [trurl-2-13b-academic-Q3_K_L.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [trurl-2-13b-academic-Q4_0.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [trurl-2-13b-academic-Q4_K_S.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [trurl-2-13b-academic-Q4_K_M.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [trurl-2-13b-academic-Q5_0.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [trurl-2-13b-academic-Q5_K_S.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [trurl-2-13b-academic-Q5_K_M.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [trurl-2-13b-academic-Q6_K.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [trurl-2-13b-academic-Q8_0.gguf](https://huggingface.co/tensorblock/Voicelab_trurl-2-13b-academic-GGUF/blob/main/trurl-2-13b-academic-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Voicelab_trurl-2-13b-academic-GGUF --include "trurl-2-13b-academic-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Voicelab_trurl-2-13b-academic-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
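After downloading, you can sanity-check the file with llama.cpp's `llama-cli` (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/trurl-2-13b-academic-Q4_K_M.gguf -p "Cześć, przedstaw się krótko." -n 128
```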
|
tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF | tensorblock | 2025-06-19T02:01:50Z | 15 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:DopeorNope/Zero_COKE_K-13B",
"base_model:quantized:DopeorNope/Zero_COKE_K-13B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T11:41:25Z | ---
base_model: DopeorNope/Zero_COKE_K-13B
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## DopeorNope/Zero_COKE_K-13B - GGUF
This repo contains GGUF format model files for [DopeorNope/Zero_COKE_K-13B](https://huggingface.co/DopeorNope/Zero_COKE_K-13B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Zero_COKE_K-13B-Q2_K.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [Zero_COKE_K-13B-Q3_K_S.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [Zero_COKE_K-13B-Q3_K_M.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [Zero_COKE_K-13B-Q3_K_L.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [Zero_COKE_K-13B-Q4_0.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Zero_COKE_K-13B-Q4_K_S.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [Zero_COKE_K-13B-Q4_K_M.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [Zero_COKE_K-13B-Q5_0.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Zero_COKE_K-13B-Q5_K_S.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [Zero_COKE_K-13B-Q5_K_M.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [Zero_COKE_K-13B-Q6_K.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [Zero_COKE_K-13B-Q8_0.gguf](https://huggingface.co/tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF/blob/main/Zero_COKE_K-13B-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF --include "Zero_COKE_K-13B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/DopeorNope_Zero_COKE_K-13B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
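After downloading, a quick local check with llama.cpp's `llama-cli` looks like this (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/Zero_COKE_K-13B-Q4_K_M.gguf -p "Briefly explain what this model is good at." -n 128
```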
|
tensorblock/maywell_Synatra-7B-v0.3-base-GGUF | tensorblock | 2025-06-19T02:01:39Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"base_model:maywell/Synatra-7B-v0.3-base",
"base_model:quantized:maywell/Synatra-7B-v0.3-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-05T08:39:28Z | ---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: maywell/Synatra-7B-v0.3-base
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## maywell/Synatra-7B-v0.3-base - GGUF
This repo contains GGUF format model files for [maywell/Synatra-7B-v0.3-base](https://huggingface.co/maywell/Synatra-7B-v0.3-base).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Synatra-7B-v0.3-base-Q2_K.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Synatra-7B-v0.3-base-Q3_K_S.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Synatra-7B-v0.3-base-Q3_K_M.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Synatra-7B-v0.3-base-Q3_K_L.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Synatra-7B-v0.3-base-Q4_0.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Synatra-7B-v0.3-base-Q4_K_S.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Synatra-7B-v0.3-base-Q4_K_M.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Synatra-7B-v0.3-base-Q5_0.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Synatra-7B-v0.3-base-Q5_K_S.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Synatra-7B-v0.3-base-Q5_K_M.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Synatra-7B-v0.3-base-Q6_K.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Synatra-7B-v0.3-base-Q8_0.gguf](https://huggingface.co/tensorblock/maywell_Synatra-7B-v0.3-base-GGUF/blob/main/Synatra-7B-v0.3-base-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/maywell_Synatra-7B-v0.3-base-GGUF --include "Synatra-7B-v0.3-base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/maywell_Synatra-7B-v0.3-base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
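After downloading, you can run a quick local check with llama.cpp's `llama-cli` (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/Synatra-7B-v0.3-base-Q4_K_M.gguf -p "한국어로 짧은 자기소개를 작성해 주세요." -n 128
```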
|
tensorblock/defog_sqlcoder2-GGUF | tensorblock | 2025-06-19T02:01:30Z | 153 | 0 | null | [
"gguf",
"code",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:defog/sqlcoder2",
"base_model:quantized:defog/sqlcoder2",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T06:25:43Z | ---
license: other
language:
- en
pipeline_tag: text-generation
tags:
- code
- TensorBlock
- GGUF
base_model: defog/sqlcoder2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## defog/sqlcoder2 - GGUF
This repo contains GGUF format model files for [defog/sqlcoder2](https://huggingface.co/defog/sqlcoder2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [sqlcoder2-Q2_K.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q2_K.gguf) | Q2_K | 6.303 GB | smallest, significant quality loss - not recommended for most purposes |
| [sqlcoder2-Q3_K_S.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q3_K_S.gguf) | Q3_K_S | 7.107 GB | very small, high quality loss |
| [sqlcoder2-Q3_K_M.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q3_K_M.gguf) | Q3_K_M | 8.356 GB | very small, high quality loss |
| [sqlcoder2-Q3_K_L.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q3_K_L.gguf) | Q3_K_L | 9.262 GB | small, substantial quality loss |
| [sqlcoder2-Q4_0.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q4_0.gguf) | Q4_0 | 9.160 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sqlcoder2-Q4_K_S.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q4_K_S.gguf) | Q4_K_S | 9.255 GB | small, greater quality loss |
| [sqlcoder2-Q4_K_M.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q4_K_M.gguf) | Q4_K_M | 10.136 GB | medium, balanced quality - recommended |
| [sqlcoder2-Q5_0.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q5_0.gguf) | Q5_0 | 11.093 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sqlcoder2-Q5_K_S.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q5_K_S.gguf) | Q5_K_S | 11.093 GB | large, low quality loss - recommended |
| [sqlcoder2-Q5_K_M.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q5_K_M.gguf) | Q5_K_M | 11.703 GB | large, very low quality loss - recommended |
| [sqlcoder2-Q6_K.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q6_K.gguf) | Q6_K | 13.147 GB | very large, extremely low quality loss |
| [sqlcoder2-Q8_0.gguf](https://huggingface.co/tensorblock/defog_sqlcoder2-GGUF/blob/main/sqlcoder2-Q8_0.gguf) | Q8_0 | 16.966 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/defog_sqlcoder2-GGUF --include "sqlcoder2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/defog_sqlcoder2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
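Once the file is on disk, you can try it locally with llama.cpp's `llama-cli` (a minimal sketch, assuming a compatible llama.cpp build; the quant and the SQL-style prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/sqlcoder2-Q4_K_M.gguf -p "-- Write a SQL query that returns the total number of orders per customer" -n 128
```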
|
tensorblock/upstage_llama-30b-instruct-2048-GGUF | tensorblock | 2025-06-19T02:00:53Z | 48 | 0 | null | [
"gguf",
"upstage",
"llama",
"instruct",
"instruction",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:sciq",
"dataset:metaeval/ScienceQA_text_only",
"dataset:GAIR/lima",
"dataset:Open-Orca/OpenOrca",
"dataset:openbookqa",
"base_model:upstage/llama-30b-instruct-2048",
"base_model:quantized:upstage/llama-30b-instruct-2048",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T20:42:23Z | ---
datasets:
- sciq
- metaeval/ScienceQA_text_only
- GAIR/lima
- Open-Orca/OpenOrca
- openbookqa
language:
- en
tags:
- upstage
- llama
- instruct
- instruction
- TensorBlock
- GGUF
pipeline_tag: text-generation
base_model: upstage/llama-30b-instruct-2048
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## upstage/llama-30b-instruct-2048 - GGUF
This repo contains GGUF format model files for [upstage/llama-30b-instruct-2048](https://huggingface.co/upstage/llama-30b-instruct-2048).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-30b-instruct-2048-Q2_K.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q2_K.gguf) | Q2_K | 12.049 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-30b-instruct-2048-Q3_K_S.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q3_K_S.gguf) | Q3_K_S | 14.064 GB | very small, high quality loss |
| [llama-30b-instruct-2048-Q3_K_M.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q3_K_M.gguf) | Q3_K_M | 15.776 GB | very small, high quality loss |
| [llama-30b-instruct-2048-Q3_K_L.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q3_K_L.gguf) | Q3_K_L | 17.280 GB | small, substantial quality loss |
| [llama-30b-instruct-2048-Q4_0.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q4_0.gguf) | Q4_0 | 18.356 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-30b-instruct-2048-Q4_K_S.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q4_K_S.gguf) | Q4_K_S | 18.482 GB | small, greater quality loss |
| [llama-30b-instruct-2048-Q4_K_M.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q4_K_M.gguf) | Q4_K_M | 19.621 GB | medium, balanced quality - recommended |
| [llama-30b-instruct-2048-Q5_0.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q5_0.gguf) | Q5_0 | 22.395 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-30b-instruct-2048-Q5_K_S.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q5_K_S.gguf) | Q5_K_S | 22.395 GB | large, low quality loss - recommended |
| [llama-30b-instruct-2048-Q5_K_M.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q5_K_M.gguf) | Q5_K_M | 23.047 GB | large, very low quality loss - recommended |
| [llama-30b-instruct-2048-Q6_K.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q6_K.gguf) | Q6_K | 26.687 GB | very large, extremely low quality loss |
| [llama-30b-instruct-2048-Q8_0.gguf](https://huggingface.co/tensorblock/upstage_llama-30b-instruct-2048-GGUF/blob/main/llama-30b-instruct-2048-Q8_0.gguf) | Q8_0 | 34.565 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/upstage_llama-30b-instruct-2048-GGUF --include "llama-30b-instruct-2048-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/upstage_llama-30b-instruct-2048-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
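After downloading, a quick local check with llama.cpp's `llama-cli` looks like this (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/llama-30b-instruct-2048-Q4_K_M.gguf -p "Summarize the benefits of instruction tuning in two sentences." -n 128
```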
|
tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF | tensorblock | 2025-06-19T02:00:40Z | 37 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:mhhmm/leetcode-solutions-python",
"base_model:fireballoon/baichuan-vicuna-chinese-7b",
"base_model:quantized:fireballoon/baichuan-vicuna-chinese-7b",
"region:us"
] | text-generation | 2025-05-04T13:45:02Z | ---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- QingyiSi/Alpaca-CoT
- mhhmm/leetcode-solutions-python
tags:
- TensorBlock
- GGUF
base_model: fireballoon/baichuan-vicuna-chinese-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## fireballoon/baichuan-vicuna-chinese-7b - GGUF
This repo contains GGUF format model files for [fireballoon/baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [baichuan-vicuna-chinese-7b-Q2_K.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q2_K.gguf) | Q2_K | 2.684 GB | smallest, significant quality loss - not recommended for most purposes |
| [baichuan-vicuna-chinese-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q3_K_S.gguf) | Q3_K_S | 3.113 GB | very small, high quality loss |
| [baichuan-vicuna-chinese-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q3_K_M.gguf) | Q3_K_M | 3.462 GB | very small, high quality loss |
| [baichuan-vicuna-chinese-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q3_K_L.gguf) | Q3_K_L | 3.762 GB | small, substantial quality loss |
| [baichuan-vicuna-chinese-7b-Q4_0.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q4_0.gguf) | Q4_0 | 4.008 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [baichuan-vicuna-chinese-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q4_K_S.gguf) | Q4_K_S | 4.039 GB | small, greater quality loss |
| [baichuan-vicuna-chinese-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q4_K_M.gguf) | Q4_K_M | 4.263 GB | medium, balanced quality - recommended |
| [baichuan-vicuna-chinese-7b-Q5_0.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q5_0.gguf) | Q5_0 | 4.850 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [baichuan-vicuna-chinese-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q5_K_S.gguf) | Q5_K_S | 4.850 GB | large, low quality loss - recommended |
| [baichuan-vicuna-chinese-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q5_K_M.gguf) | Q5_K_M | 4.981 GB | large, very low quality loss - recommended |
| [baichuan-vicuna-chinese-7b-Q6_K.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q6_K.gguf) | Q6_K | 5.745 GB | very large, extremely low quality loss |
| [baichuan-vicuna-chinese-7b-Q8_0.gguf](https://huggingface.co/tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF/blob/main/baichuan-vicuna-chinese-7b-Q8_0.gguf) | Q8_0 | 7.440 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF --include "baichuan-vicuna-chinese-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/fireballoon_baichuan-vicuna-chinese-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
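After downloading, you can run a quick local check with llama.cpp's `llama-cli` (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/baichuan-vicuna-chinese-7b-Q4_K_M.gguf -p "请用中文简单介绍一下你自己。" -n 128
```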
|
tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF | tensorblock | 2025-06-19T01:59:42Z | 25 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:SciPhi/SciPhi-Self-RAG-Mistral-7B-32k",
"base_model:quantized:SciPhi/SciPhi-Self-RAG-Mistral-7B-32k",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T12:15:51Z | ---
license: mit
tags:
- TensorBlock
- GGUF
base_model: SciPhi/SciPhi-Self-RAG-Mistral-7B-32k
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## SciPhi/SciPhi-Self-RAG-Mistral-7B-32k - GGUF
This repo contains GGUF format model files for [SciPhi/SciPhi-Self-RAG-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Self-RAG-Mistral-7B-32k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q2_K.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q3_K_S.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q3_K_M.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q3_K_L.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q4_0.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q4_K_S.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q4_K_M.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q4_K_M.gguf) | Q4_K_M | 4.369 GB | medium, balanced quality - recommended |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q5_0.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q5_K_S.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q5_K_M.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q5_K_M.gguf) | Q5_K_M | 5.132 GB | large, very low quality loss - recommended |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q6_K.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [SciPhi-Self-RAG-Mistral-7B-32k-Q8_0.gguf](https://huggingface.co/tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF/blob/main/SciPhi-Self-RAG-Mistral-7B-32k-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF --include "SciPhi-Self-RAG-Mistral-7B-32k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SciPhi_SciPhi-Self-RAG-Mistral-7B-32k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
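Once downloaded, a quick local test with llama.cpp's `llama-cli` looks like this (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/SciPhi-Self-RAG-Mistral-7B-32k-Q4_K_M.gguf -p "Explain retrieval-augmented generation in two sentences." -n 128
```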
|
tensorblock/ibranze_araproje-llama2-7b-hf-GGUF | tensorblock | 2025-06-19T01:58:36Z | 13 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:ibranze/araproje-llama2-7b-hf",
"base_model:quantized:ibranze/araproje-llama2-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T00:44:17Z | ---
base_model: ibranze/araproje-llama2-7b-hf
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ibranze/araproje-llama2-7b-hf - GGUF
This repo contains GGUF format model files for [ibranze/araproje-llama2-7b-hf](https://huggingface.co/ibranze/araproje-llama2-7b-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [araproje-llama2-7b-hf-Q2_K.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [araproje-llama2-7b-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [araproje-llama2-7b-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [araproje-llama2-7b-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [araproje-llama2-7b-hf-Q4_0.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [araproje-llama2-7b-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [araproje-llama2-7b-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [araproje-llama2-7b-hf-Q5_0.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [araproje-llama2-7b-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [araproje-llama2-7b-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [araproje-llama2-7b-hf-Q6_K.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [araproje-llama2-7b-hf-Q8_0.gguf](https://huggingface.co/tensorblock/ibranze_araproje-llama2-7b-hf-GGUF/blob/main/araproje-llama2-7b-hf-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ibranze_araproje-llama2-7b-hf-GGUF --include "araproje-llama2-7b-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ibranze_araproje-llama2-7b-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
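After downloading, you can sanity-check the file with llama.cpp's `llama-cli` (a minimal sketch, assuming a compatible llama.cpp build; the quant and prompt are illustrative):
```shell
# Generate a short completion from the downloaded GGUF file
llama-cli -m MY_LOCAL_DIR/araproje-llama2-7b-hf-Q4_K_M.gguf -p "Briefly describe what a quantized GGUF model is." -n 128
```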
|
tensorblock/pharaouk_untitled-7B-GGUF | tensorblock | 2025-06-19T01:58:34Z | 14 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:pharaouk/untitled-7B",
"base_model:quantized:pharaouk/untitled-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T00:16:47Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: pharaouk/untitled-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## pharaouk/untitled-7B - GGUF
This repo contains GGUF format model files for [pharaouk/untitled-7B](https://huggingface.co/pharaouk/untitled-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [untitled-7B-Q2_K.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [untitled-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [untitled-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [untitled-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [untitled-7B-Q4_0.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [untitled-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [untitled-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [untitled-7B-Q5_0.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [untitled-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [untitled-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [untitled-7B-Q6_K.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [untitled-7B-Q8_0.gguf](https://huggingface.co/tensorblock/pharaouk_untitled-7B-GGUF/blob/main/untitled-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/pharaouk_untitled-7B-GGUF --include "untitled-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/pharaouk_untitled-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF | tensorblock | 2025-06-19T01:58:19Z | 26 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:Weyaxi/Dolphin-Nebula-7B",
"base_model:quantized:Weyaxi/Dolphin-Nebula-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T19:55:19Z | ---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
tags:
- TensorBlock
- GGUF
base_model: Weyaxi/Dolphin-Nebula-7B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Weyaxi/Dolphin-Nebula-7B - GGUF
This repo contains GGUF format model files for [Weyaxi/Dolphin-Nebula-7B](https://huggingface.co/Weyaxi/Dolphin-Nebula-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
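For illustration, a single-turn prompt in this template could be passed to llama.cpp's `llama-cli` roughly as follows (a minimal sketch, assuming the Q4_K_M file has already been downloaded to `MY_LOCAL_DIR`; the instruction text is made up):
```shell
# The prompt below is the template above with an example instruction filled in
llama-cli -m MY_LOCAL_DIR/Dolphin-Nebula-7B-Q4_K_M.gguf \
  -p "<s>[INST] Summarize what a GGUF quantization level trades off. [/INST]" \
  -n 128
```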
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Dolphin-Nebula-7B-Q2_K.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Dolphin-Nebula-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Dolphin-Nebula-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Dolphin-Nebula-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Dolphin-Nebula-7B-Q4_0.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Dolphin-Nebula-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Dolphin-Nebula-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Dolphin-Nebula-7B-Q5_0.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Dolphin-Nebula-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Dolphin-Nebula-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Dolphin-Nebula-7B-Q6_K.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Dolphin-Nebula-7B-Q8_0.gguf](https://huggingface.co/tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF/blob/main/Dolphin-Nebula-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF --include "Dolphin-Nebula-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Weyaxi_Dolphin-Nebula-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
techlab-khc/laboratory-brca | techlab-khc | 2025-06-19T01:57:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T01:57:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF | tensorblock | 2025-06-19T01:57:54Z | 39 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Abe13/Full-juni-Mistral-7B-OpenOrca",
"base_model:quantized:Abe13/Full-juni-Mistral-7B-OpenOrca",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T12:49:35Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: Abe13/Full-juni-Mistral-7B-OpenOrca
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Abe13/Full-juni-Mistral-7B-OpenOrca - GGUF
This repo contains GGUF format model files for [Abe13/Full-juni-Mistral-7B-OpenOrca](https://huggingface.co/Abe13/Full-juni-Mistral-7B-OpenOrca).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
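For illustration, the ChatML template above can be filled in and passed to llama.cpp's `llama-cli` roughly as follows (a minimal sketch, assuming the Q4_K_M file is already in `MY_LOCAL_DIR`; the system and user messages are placeholders):
```shell
# bash $'...' quoting is used so the \n escapes become real newlines in the prompt
llama-cli -m MY_LOCAL_DIR/Full-juni-Mistral-7B-OpenOrca-Q4_K_M.gguf \
  -p $'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nName three trade-offs of GGUF quantization.<|im_end|>\n<|im_start|>assistant\n' \
  -n 256
```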
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Full-juni-Mistral-7B-OpenOrca-Q2_K.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Full-juni-Mistral-7B-OpenOrca-Q3_K_S.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Full-juni-Mistral-7B-OpenOrca-Q3_K_M.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Full-juni-Mistral-7B-OpenOrca-Q3_K_L.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Full-juni-Mistral-7B-OpenOrca-Q4_0.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Full-juni-Mistral-7B-OpenOrca-Q4_K_S.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Full-juni-Mistral-7B-OpenOrca-Q4_K_M.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Full-juni-Mistral-7B-OpenOrca-Q5_0.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Full-juni-Mistral-7B-OpenOrca-Q5_K_S.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Full-juni-Mistral-7B-OpenOrca-Q5_K_M.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Full-juni-Mistral-7B-OpenOrca-Q6_K.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Full-juni-Mistral-7B-OpenOrca-Q8_0.gguf](https://huggingface.co/tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF/blob/main/Full-juni-Mistral-7B-OpenOrca-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF --include "Full-juni-Mistral-7B-OpenOrca-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Abe13_Full-juni-Mistral-7B-OpenOrca-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF | tensorblock | 2025-06-19T01:57:38Z | 17 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"ms",
"base_model:mesolitica/llama-1b-hf-32768-fpf",
"base_model:quantized:mesolitica/llama-1b-hf-32768-fpf",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T08:17:45Z | ---
language:
- ms
tags:
- TensorBlock
- GGUF
base_model: mesolitica/llama-1b-hf-32768-fpf
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mesolitica/llama-1b-hf-32768-fpf - GGUF
This repo contains GGUF format model files for [mesolitica/llama-1b-hf-32768-fpf](https://huggingface.co/mesolitica/llama-1b-hf-32768-fpf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-1b-hf-32768-fpf-Q2_K.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q2_K.gguf) | Q2_K | 0.449 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-1b-hf-32768-fpf-Q3_K_S.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q3_K_S.gguf) | Q3_K_S | 0.513 GB | very small, high quality loss |
| [llama-1b-hf-32768-fpf-Q3_K_M.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q3_K_M.gguf) | Q3_K_M | 0.559 GB | very small, high quality loss |
| [llama-1b-hf-32768-fpf-Q3_K_L.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q3_K_L.gguf) | Q3_K_L | 0.594 GB | small, substantial quality loss |
| [llama-1b-hf-32768-fpf-Q4_0.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-1b-hf-32768-fpf-Q4_K_S.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q4_K_S.gguf) | Q4_K_S | 0.646 GB | small, greater quality loss |
| [llama-1b-hf-32768-fpf-Q4_K_M.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q4_K_M.gguf) | Q4_K_M | 0.669 GB | medium, balanced quality - recommended |
| [llama-1b-hf-32768-fpf-Q5_0.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q5_0.gguf) | Q5_0 | 0.755 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-1b-hf-32768-fpf-Q5_K_S.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q5_K_S.gguf) | Q5_K_S | 0.755 GB | large, low quality loss - recommended |
| [llama-1b-hf-32768-fpf-Q5_K_M.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q5_K_M.gguf) | Q5_K_M | 0.771 GB | large, very low quality loss - recommended |
| [llama-1b-hf-32768-fpf-Q6_K.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q6_K.gguf) | Q6_K | 0.880 GB | very large, extremely low quality loss |
| [llama-1b-hf-32768-fpf-Q8_0.gguf](https://huggingface.co/tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF/blob/main/llama-1b-hf-32768-fpf-Q8_0.gguf) | Q8_0 | 1.139 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF --include "llama-1b-hf-32768-fpf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mesolitica_llama-1b-hf-32768-fpf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF | tensorblock | 2025-06-19T01:57:33Z | 23 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down",
"base_model:quantized:CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T07:02:45Z | ---
base_model: CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down - GGUF
This repo contains GGUF format model files for [CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q2_K.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q3_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q3_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q3_K_L.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q4_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q4_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q4_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q5_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q5_K_S.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q5_K_M.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q6_K.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q8_0.gguf](https://huggingface.co/tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF/blob/main/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF --include "llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CHIH-HUNG_llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF | tensorblock | 2025-06-19T01:57:23Z | 21 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:LTC-AI-Labs/L2-7b-Hermes-WVG-Test",
"base_model:quantized:LTC-AI-Labs/L2-7b-Hermes-WVG-Test",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T03:02:17Z | ---
base_model: LTC-AI-Labs/L2-7b-Hermes-WVG-Test
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## LTC-AI-Labs/L2-7b-Hermes-WVG-Test - GGUF
This repo contains GGUF format model files for [LTC-AI-Labs/L2-7b-Hermes-WVG-Test](https://huggingface.co/LTC-AI-Labs/L2-7b-Hermes-WVG-Test).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L2-7b-Hermes-WVG-Test-Q2_K.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [L2-7b-Hermes-WVG-Test-Q3_K_S.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [L2-7b-Hermes-WVG-Test-Q3_K_M.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [L2-7b-Hermes-WVG-Test-Q3_K_L.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [L2-7b-Hermes-WVG-Test-Q4_0.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [L2-7b-Hermes-WVG-Test-Q4_K_S.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [L2-7b-Hermes-WVG-Test-Q4_K_M.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [L2-7b-Hermes-WVG-Test-Q5_0.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [L2-7b-Hermes-WVG-Test-Q5_K_S.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [L2-7b-Hermes-WVG-Test-Q5_K_M.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [L2-7b-Hermes-WVG-Test-Q6_K.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [L2-7b-Hermes-WVG-Test-Q8_0.gguf](https://huggingface.co/tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF/blob/main/L2-7b-Hermes-WVG-Test-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF --include "L2-7b-Hermes-WVG-Test-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/LTC-AI-Labs_L2-7b-Hermes-WVG-Test-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/defog_sqlcoder-7b-GGUF | tensorblock | 2025-06-19T01:56:58Z | 68 | 0 | null | [
"gguf",
"code",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:defog/sqlcoder-7b",
"base_model:quantized:defog/sqlcoder-7b",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T19:09:13Z | ---
license: cc-by-sa-4.0
language:
- en
pipeline_tag: text-generation
tags:
- code
- TensorBlock
- GGUF
base_model: defog/sqlcoder-7b
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## defog/sqlcoder-7b - GGUF
This repo contains GGUF format model files for [defog/sqlcoder-7b](https://huggingface.co/defog/sqlcoder-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [sqlcoder-7b-Q2_K.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [sqlcoder-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [sqlcoder-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [sqlcoder-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [sqlcoder-7b-Q4_0.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sqlcoder-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [sqlcoder-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [sqlcoder-7b-Q5_0.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sqlcoder-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [sqlcoder-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [sqlcoder-7b-Q6_K.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [sqlcoder-7b-Q8_0.gguf](https://huggingface.co/tensorblock/defog_sqlcoder-7b-GGUF/blob/main/sqlcoder-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/defog_sqlcoder-7b-GGUF --include "sqlcoder-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/defog_sqlcoder-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
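Since the prompt format could not be determined automatically, the sketch below is only a bare-bones local test, assuming llama.cpp's `llama-cli` is available and the Q4_K_M file has been downloaded to `MY_LOCAL_DIR`; check the original defog/sqlcoder-7b repository for the intended prompt layout.
```shell
# Bare-bones completion-style test; the SQL comment prompt is illustrative, not the official format
llama-cli -m MY_LOCAL_DIR/sqlcoder-7b-Q4_K_M.gguf \
  -p $'-- Return the ten most recent orders with their customer names\nSELECT' \
  -n 128
```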
|
tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF | tensorblock | 2025-06-19T01:56:53Z | 28 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:llmware/bling-sheared-llama-1.3b-0.1",
"base_model:quantized:llmware/bling-sheared-llama-1.3b-0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T18:29:28Z | ---
license: apache-2.0
inference: false
tags:
- TensorBlock
- GGUF
base_model: llmware/bling-sheared-llama-1.3b-0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## llmware/bling-sheared-llama-1.3b-0.1 - GGUF
This repo contains GGUF format model files for [llmware/bling-sheared-llama-1.3b-0.1](https://huggingface.co/llmware/bling-sheared-llama-1.3b-0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [bling-sheared-llama-1.3b-0.1-Q2_K.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q2_K.gguf) | Q2_K | 0.559 GB | smallest, significant quality loss - not recommended for most purposes |
| [bling-sheared-llama-1.3b-0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q3_K_S.gguf) | Q3_K_S | 0.641 GB | very small, high quality loss |
| [bling-sheared-llama-1.3b-0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q3_K_M.gguf) | Q3_K_M | 0.703 GB | very small, high quality loss |
| [bling-sheared-llama-1.3b-0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q3_K_L.gguf) | Q3_K_L | 0.743 GB | small, substantial quality loss |
| [bling-sheared-llama-1.3b-0.1-Q4_0.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q4_0.gguf) | Q4_0 | 0.775 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [bling-sheared-llama-1.3b-0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q4_K_S.gguf) | Q4_K_S | 0.813 GB | small, greater quality loss |
| [bling-sheared-llama-1.3b-0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q4_K_M.gguf) | Q4_K_M | 0.872 GB | medium, balanced quality - recommended |
| [bling-sheared-llama-1.3b-0.1-Q5_0.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q5_0.gguf) | Q5_0 | 0.935 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [bling-sheared-llama-1.3b-0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q5_K_S.gguf) | Q5_K_S | 0.952 GB | large, low quality loss - recommended |
| [bling-sheared-llama-1.3b-0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q5_K_M.gguf) | Q5_K_M | 1.001 GB | large, very low quality loss - recommended |
| [bling-sheared-llama-1.3b-0.1-Q6_K.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q6_K.gguf) | Q6_K | 1.170 GB | very large, extremely low quality loss |
| [bling-sheared-llama-1.3b-0.1-Q8_0.gguf](https://huggingface.co/tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF/blob/main/bling-sheared-llama-1.3b-0.1-Q8_0.gguf) | Q8_0 | 1.431 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF --include "bling-sheared-llama-1.3b-0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/llmware_bling-sheared-llama-1.3b-0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF | tensorblock | 2025-06-19T01:56:51Z | 18 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Lazycuber/L2-7b-Base-Guanaco-Vicuna",
"base_model:quantized:Lazycuber/L2-7b-Base-Guanaco-Vicuna",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T18:10:13Z | ---
base_model: Lazycuber/L2-7b-Base-Guanaco-Vicuna
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Lazycuber/L2-7b-Base-Guanaco-Vicuna - GGUF
This repo contains GGUF format model files for [Lazycuber/L2-7b-Base-Guanaco-Vicuna](https://huggingface.co/Lazycuber/L2-7b-Base-Guanaco-Vicuna).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L2-7b-Base-Guanaco-Vicuna-Q2_K.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [L2-7b-Base-Guanaco-Vicuna-Q3_K_S.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [L2-7b-Base-Guanaco-Vicuna-Q3_K_M.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [L2-7b-Base-Guanaco-Vicuna-Q3_K_L.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [L2-7b-Base-Guanaco-Vicuna-Q4_0.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [L2-7b-Base-Guanaco-Vicuna-Q4_K_S.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [L2-7b-Base-Guanaco-Vicuna-Q4_K_M.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [L2-7b-Base-Guanaco-Vicuna-Q5_0.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [L2-7b-Base-Guanaco-Vicuna-Q5_K_S.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [L2-7b-Base-Guanaco-Vicuna-Q5_K_M.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [L2-7b-Base-Guanaco-Vicuna-Q6_K.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [L2-7b-Base-Guanaco-Vicuna-Q8_0.gguf](https://huggingface.co/tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF/blob/main/L2-7b-Base-Guanaco-Vicuna-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF --include "L2-7b-Base-Guanaco-Vicuna-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
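If you prefer scripting the download instead of using the CLI, the same files can be fetched with the `huggingface_hub` Python API. The snippet below is a minimal sketch: the repo and filename come from the table above, and `MY_LOCAL_DIR` is a placeholder for whatever target directory you use.
```python
from huggingface_hub import hf_hub_download

# Download a single quantized file from this repo to a local directory.
local_path = hf_hub_download(
    repo_id="tensorblock/Lazycuber_L2-7b-Base-Guanaco-Vicuna-GGUF",
    filename="L2-7b-Base-Guanaco-Vicuna-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",  # placeholder directory
)
print(local_path)  # path of the downloaded GGUF file
```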
|
tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF | tensorblock | 2025-06-19T01:56:50Z | 42 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"base_model:kyujinpy/Kosy-platypus2-13B-v4",
"base_model:quantized:kyujinpy/Kosy-platypus2-13B-v4",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T17:30:51Z | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- TensorBlock
- GGUF
base_model: kyujinpy/Kosy-platypus2-13B-v4
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## kyujinpy/Kosy-platypus2-13B-v4 - GGUF
This repo contains GGUF format model files for [kyujinpy/Kosy-platypus2-13B-v4](https://huggingface.co/kyujinpy/Kosy-platypus2-13B-v4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Kosy-platypus2-13B-v4-Q2_K.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [Kosy-platypus2-13B-v4-Q3_K_S.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [Kosy-platypus2-13B-v4-Q3_K_M.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [Kosy-platypus2-13B-v4-Q3_K_L.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [Kosy-platypus2-13B-v4-Q4_0.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Kosy-platypus2-13B-v4-Q4_K_S.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [Kosy-platypus2-13B-v4-Q4_K_M.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [Kosy-platypus2-13B-v4-Q5_0.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Kosy-platypus2-13B-v4-Q5_K_S.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [Kosy-platypus2-13B-v4-Q5_K_M.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [Kosy-platypus2-13B-v4-Q6_K.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [Kosy-platypus2-13B-v4-Q8_0.gguf](https://huggingface.co/tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF/blob/main/Kosy-platypus2-13B-v4-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF --include "Kosy-platypus2-13B-v4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/kyujinpy_Kosy-platypus2-13B-v4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF | tensorblock | 2025-06-19T01:56:41Z | 78 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"base_model:TinyLlama/tinyLlama-intermediate-checkpoints",
"base_model:quantized:TinyLlama/tinyLlama-intermediate-checkpoints",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T15:09:37Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
tags:
- TensorBlock
- GGUF
base_model: TinyLlama/tinyLlama-intermediate-checkpoints
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## TinyLlama/tinyLlama-intermediate-checkpoints - GGUF
This repo contains GGUF format model files for [TinyLlama/tinyLlama-intermediate-checkpoints](https://huggingface.co/TinyLlama/tinyLlama-intermediate-checkpoints).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [tinyLlama-intermediate-checkpoints-Q2_K.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q2_K.gguf) | Q2_K | 0.001 GB | smallest, significant quality loss - not recommended for most purposes |
| [tinyLlama-intermediate-checkpoints-Q3_K_S.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q3_K_S.gguf) | Q3_K_S | 0.001 GB | very small, high quality loss |
| [tinyLlama-intermediate-checkpoints-Q3_K_M.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q3_K_M.gguf) | Q3_K_M | 0.001 GB | very small, high quality loss |
| [tinyLlama-intermediate-checkpoints-Q3_K_L.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q3_K_L.gguf) | Q3_K_L | 0.001 GB | small, substantial quality loss |
| [tinyLlama-intermediate-checkpoints-Q4_0.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q4_0.gguf) | Q4_0 | 0.001 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tinyLlama-intermediate-checkpoints-Q4_K_S.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q4_K_S.gguf) | Q4_K_S | 0.001 GB | small, greater quality loss |
| [tinyLlama-intermediate-checkpoints-Q4_K_M.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q4_K_M.gguf) | Q4_K_M | 0.001 GB | medium, balanced quality - recommended |
| [tinyLlama-intermediate-checkpoints-Q5_0.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q5_0.gguf) | Q5_0 | 0.001 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tinyLlama-intermediate-checkpoints-Q5_K_S.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q5_K_S.gguf) | Q5_K_S | 0.001 GB | large, low quality loss - recommended |
| [tinyLlama-intermediate-checkpoints-Q5_K_M.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q5_K_M.gguf) | Q5_K_M | 0.001 GB | large, very low quality loss - recommended |
| [tinyLlama-intermediate-checkpoints-Q6_K.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q6_K.gguf) | Q6_K | 0.001 GB | very large, extremely low quality loss |
| [tinyLlama-intermediate-checkpoints-Q8_0.gguf](https://huggingface.co/tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF/blob/main/tinyLlama-intermediate-checkpoints-Q8_0.gguf) | Q8_0 | 0.001 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF --include "tinyLlama-intermediate-checkpoints-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
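The pattern-based download above can also be done programmatically with `snapshot_download`, which mirrors the `--include` filter via `allow_patterns`. This is only a sketch; adjust the pattern and target directory to your needs.
```python
from huggingface_hub import snapshot_download

# Fetch every Q4_K variant from this repo in one call.
snapshot_download(
    repo_id="tensorblock/TinyLlama_tinyLlama-intermediate-checkpoints-GGUF",
    allow_patterns=["*Q4_K*.gguf"],
    local_dir="MY_LOCAL_DIR",  # placeholder directory
)
```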
|
tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF | tensorblock | 2025-06-19T01:56:29Z | 140 | 0 | null | [
"gguf",
"Mistral",
"finetune",
"chatml",
"DPO",
"German",
"Deutsch",
"synthetic data",
"TensorBlock",
"GGUF",
"de",
"en",
"base_model:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:quantized:DiscoResearch/DiscoLM_German_7b_v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T12:40:46Z | ---
base_model: DiscoResearch/DiscoLM_German_7b_v1
tags:
- Mistral
- finetune
- chatml
- DPO
- German
- Deutsch
- synthetic data
- TensorBlock
- GGUF
license: apache-2.0
language:
- de
- en
model-index:
- name: DiscoLM_German_7b_v1
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## DiscoResearch/DiscoLM_German_7b_v1 - GGUF
This repo contains GGUF format model files for [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DiscoLM_German_7b_v1-Q2_K.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [DiscoLM_German_7b_v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [DiscoLM_German_7b_v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [DiscoLM_German_7b_v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [DiscoLM_German_7b_v1-Q4_0.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [DiscoLM_German_7b_v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [DiscoLM_German_7b_v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [DiscoLM_German_7b_v1-Q5_0.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [DiscoLM_German_7b_v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [DiscoLM_German_7b_v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [DiscoLM_German_7b_v1-Q6_K.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [DiscoLM_German_7b_v1-Q8_0.gguf](https://huggingface.co/tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF/blob/main/DiscoLM_German_7b_v1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF --include "DiscoLM_German_7b_v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/DiscoResearch_DiscoLM_German_7b_v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF | tensorblock | 2025-06-19T01:56:22Z | 61 | 0 | transformers | [
"transformers",
"gguf",
"juanako",
"UNA",
"TensorBlock",
"GGUF",
"dataset:fblgit/tree-of-knowledge",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:fblgit/una-cybertron-7b-v1-fp16",
"base_model:quantized:fblgit/una-cybertron-7b-v1-fp16",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T11:39:02Z | ---
license: apache-2.0
library_name: transformers
tags:
- juanako
- UNA
- TensorBlock
- GGUF
datasets:
- fblgit/tree-of-knowledge
- Open-Orca/SlimOrca-Dedup
- HuggingFaceH4/ultrafeedback_binarized
base_model: fblgit/una-cybertron-7b-v1-fp16
model-index:
- name: una-cybertron-7b-v1-fp16
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.43
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-cybertron-7b-v1-fp16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-cybertron-7b-v1-fp16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-cybertron-7b-v1-fp16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.28
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-cybertron-7b-v1-fp16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-cybertron-7b-v1-fp16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-cybertron-7b-v1-fp16
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## fblgit/una-cybertron-7b-v1-fp16 - GGUF
This repo contains GGUF format model files for [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [una-cybertron-7b-v1-fp16-Q2_K.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [una-cybertron-7b-v1-fp16-Q3_K_S.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [una-cybertron-7b-v1-fp16-Q3_K_M.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [una-cybertron-7b-v1-fp16-Q3_K_L.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [una-cybertron-7b-v1-fp16-Q4_0.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [una-cybertron-7b-v1-fp16-Q4_K_S.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [una-cybertron-7b-v1-fp16-Q4_K_M.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [una-cybertron-7b-v1-fp16-Q5_0.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [una-cybertron-7b-v1-fp16-Q5_K_S.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [una-cybertron-7b-v1-fp16-Q5_K_M.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [una-cybertron-7b-v1-fp16-Q6_K.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [una-cybertron-7b-v1-fp16-Q8_0.gguf](https://huggingface.co/tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF/blob/main/una-cybertron-7b-v1-fp16-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF --include "una-cybertron-7b-v1-fp16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
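To see exactly which quantized files a repository offers before downloading, the Hub can also be queried from Python. A small sketch, assuming the `huggingface_hub` package is installed:
```python
from huggingface_hub import HfApi

# List the GGUF files available in this repository.
api = HfApi()
for name in api.list_repo_files("tensorblock/fblgit_una-cybertron-7b-v1-fp16-GGUF"):
    if name.endswith(".gguf"):
        print(name)
```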
|
tensorblock/migtissera_Tess-7B-v1.4-GGUF | tensorblock | 2025-06-19T01:55:48Z | 16 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:migtissera/Tess-7B-v1.4",
"base_model:quantized:migtissera/Tess-7B-v1.4",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T00:12:05Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: migtissera/Tess-7B-v1.4
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## migtissera/Tess-7B-v1.4 - GGUF
This repo contains GGUF format model files for [migtissera/Tess-7B-v1.4](https://huggingface.co/migtissera/Tess-7B-v1.4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Tess-7B-v1.4-Q2_K.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Tess-7B-v1.4-Q3_K_S.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Tess-7B-v1.4-Q3_K_M.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Tess-7B-v1.4-Q3_K_L.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Tess-7B-v1.4-Q4_0.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Tess-7B-v1.4-Q4_K_S.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Tess-7B-v1.4-Q4_K_M.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Tess-7B-v1.4-Q5_0.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Tess-7B-v1.4-Q5_K_S.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Tess-7B-v1.4-Q5_K_M.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Tess-7B-v1.4-Q6_K.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Tess-7B-v1.4-Q8_0.gguf](https://huggingface.co/tensorblock/migtissera_Tess-7B-v1.4-GGUF/blob/main/Tess-7B-v1.4-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/migtissera_Tess-7B-v1.4-GGUF --include "Tess-7B-v1.4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/migtissera_Tess-7B-v1.4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF | tensorblock | 2025-06-19T01:55:46Z | 127 | 0 | transformers | [
"transformers",
"gguf",
"llm",
"7b",
"TensorBlock",
"GGUF",
"en",
"dataset:jondurbin/truthy-dpo-v0.1",
"base_model:bardsai/jaskier-7b-dpo-v6.1",
"base_model:quantized:bardsai/jaskier-7b-dpo-v6.1",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T23:14:32Z | ---
library_name: transformers
tags:
- llm
- 7b
- TensorBlock
- GGUF
license: cc-by-4.0
datasets:
- jondurbin/truthy-dpo-v0.1
language:
- en
base_model: bardsai/jaskier-7b-dpo-v6.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## bardsai/jaskier-7b-dpo-v6.1 - GGUF
This repo contains GGUF format model files for [bardsai/jaskier-7b-dpo-v6.1](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [jaskier-7b-dpo-v6.1-Q2_K.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [jaskier-7b-dpo-v6.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [jaskier-7b-dpo-v6.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [jaskier-7b-dpo-v6.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [jaskier-7b-dpo-v6.1-Q4_0.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [jaskier-7b-dpo-v6.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [jaskier-7b-dpo-v6.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [jaskier-7b-dpo-v6.1-Q5_0.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [jaskier-7b-dpo-v6.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [jaskier-7b-dpo-v6.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [jaskier-7b-dpo-v6.1-Q6_K.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [jaskier-7b-dpo-v6.1-Q8_0.gguf](https://huggingface.co/tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF/blob/main/jaskier-7b-dpo-v6.1-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF --include "jaskier-7b-dpo-v6.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/bardsai_jaskier-7b-dpo-v6.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/M4-ai_TinyMistral-248M-v3-GGUF | tensorblock | 2025-06-19T01:55:29Z | 140 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:Locutusque/TM-DATA-V2",
"dataset:LLM360/TxT360",
"dataset:mlfoundations/dclm-baseline-1.0",
"dataset:Skylion007/openwebtext",
"dataset:JeanKaddour/minipile",
"dataset:eminorhan/gutenberg_en",
"base_model:M4-ai/TinyMistral-248M-v3",
"base_model:quantized:M4-ai/TinyMistral-248M-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T19:25:21Z | ---
language:
- en
license: apache-2.0
datasets:
- Locutusque/TM-DATA-V2
- LLM360/TxT360
- mlfoundations/dclm-baseline-1.0
- Skylion007/openwebtext
- JeanKaddour/minipile
- eminorhan/gutenberg_en
tags:
- TensorBlock
- GGUF
base_model: M4-ai/TinyMistral-248M-v3
model-index:
- name: TinyMistral-248M-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 16.39
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 1.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 0.0
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.15
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.47
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## M4-ai/TinyMistral-248M-v3 - GGUF
This repo contains GGUF format model files for [M4-ai/TinyMistral-248M-v3](https://huggingface.co/M4-ai/TinyMistral-248M-v3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
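The template above is plain ChatML, so it can be filled in with ordinary string formatting before being passed to whatever runtime you use. A minimal sketch with an illustrative system prompt and user message:
```python
# ChatML prompt template from this card, filled with example values.
CHATML = (
    "<|im_start|>system\n{system_prompt}<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

text = CHATML.format(
    system_prompt="You are a helpful assistant.",          # illustrative only
    prompt="Explain GGUF quantization in one sentence.",   # illustrative only
)
print(text)
```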
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TinyMistral-248M-v3-Q2_K.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q2_K.gguf) | Q2_K | 0.105 GB | smallest, significant quality loss - not recommended for most purposes |
| [TinyMistral-248M-v3-Q3_K_S.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q3_K_S.gguf) | Q3_K_S | 0.120 GB | very small, high quality loss |
| [TinyMistral-248M-v3-Q3_K_M.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q3_K_M.gguf) | Q3_K_M | 0.129 GB | very small, high quality loss |
| [TinyMistral-248M-v3-Q3_K_L.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q3_K_L.gguf) | Q3_K_L | 0.137 GB | small, substantial quality loss |
| [TinyMistral-248M-v3-Q4_0.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q4_0.gguf) | Q4_0 | 0.149 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TinyMistral-248M-v3-Q4_K_S.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q4_K_S.gguf) | Q4_K_S | 0.149 GB | small, greater quality loss |
| [TinyMistral-248M-v3-Q4_K_M.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q4_K_M.gguf) | Q4_K_M | 0.156 GB | medium, balanced quality - recommended |
| [TinyMistral-248M-v3-Q5_0.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q5_0.gguf) | Q5_0 | 0.176 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TinyMistral-248M-v3-Q5_K_S.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q5_K_S.gguf) | Q5_K_S | 0.176 GB | large, low quality loss - recommended |
| [TinyMistral-248M-v3-Q5_K_M.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q5_K_M.gguf) | Q5_K_M | 0.179 GB | large, very low quality loss - recommended |
| [TinyMistral-248M-v3-Q6_K.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q6_K.gguf) | Q6_K | 0.204 GB | very large, extremely low quality loss |
| [TinyMistral-248M-v3-Q8_0.gguf](https://huggingface.co/tensorblock/M4-ai_TinyMistral-248M-v3-GGUF/blob/main/TinyMistral-248M-v3-Q8_0.gguf) | Q8_0 | 0.264 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/M4-ai_TinyMistral-248M-v3-GGUF --include "TinyMistral-248M-v3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/M4-ai_TinyMistral-248M-v3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF | tensorblock | 2025-06-19T01:55:23Z | 57 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"base_model:Unbabel/TowerInstruct-13B-v0.1",
"base_model:quantized:Unbabel/TowerInstruct-13B-v0.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | translation | 2025-04-30T17:40:22Z | ---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
tags:
- TensorBlock
- GGUF
base_model: Unbabel/TowerInstruct-13B-v0.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Unbabel/TowerInstruct-13B-v0.1 - GGUF
This repo contains GGUF format model files for [Unbabel/TowerInstruct-13B-v0.1](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [TowerInstruct-13B-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [TowerInstruct-13B-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [TowerInstruct-13B-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [TowerInstruct-13B-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [TowerInstruct-13B-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [TowerInstruct-13B-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [TowerInstruct-13B-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [TowerInstruct-13B-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [TowerInstruct-13B-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [TowerInstruct-13B-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [TowerInstruct-13B-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [TowerInstruct-13B-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF/blob/main/TowerInstruct-13B-v0.1-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF --include "TowerInstruct-13B-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Unbabel_TowerInstruct-13B-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
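Once a file is downloaded, it can be run with any llama.cpp-compatible runtime. The sketch below uses the `llama-cpp-python` bindings, which are not part of this repo and are assumed to be installed separately; the file name refers to the Q4_K_M quant from the table, and the prompt follows the ChatML format shown above (system turn omitted for brevity).
```python
from llama_cpp import Llama  # assumes the llama-cpp-python package is installed

# Load the downloaded GGUF file (path is a placeholder).
llm = Llama(model_path="MY_LOCAL_DIR/TowerInstruct-13B-v0.1-Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "<|im_start|>user\n"
    "Translate the following text from Portuguese into English.\n"
    "Portuguese: Um grupo de investigadores lançou um novo modelo.\n"
    "English:<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```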
|
tensorblock/mlabonne_NeuralDarewin-7B-GGUF | tensorblock | 2025-06-19T01:55:14Z | 13 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:mlabonne/NeuralDarewin-7B",
"base_model:quantized:mlabonne/NeuralDarewin-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T16:38:57Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: mlabonne/NeuralDarewin-7B
model-index:
- name: NeuralDarewin-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlabonne/NeuralDarewin-7B - GGUF
This repo contains GGUF format model files for [mlabonne/NeuralDarewin-7B](https://huggingface.co/mlabonne/NeuralDarewin-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralDarewin-7B-Q2_K.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralDarewin-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [NeuralDarewin-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [NeuralDarewin-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [NeuralDarewin-7B-Q4_0.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralDarewin-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [NeuralDarewin-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [NeuralDarewin-7B-Q5_0.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralDarewin-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [NeuralDarewin-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [NeuralDarewin-7B-Q6_K.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [NeuralDarewin-7B-Q8_0.gguf](https://huggingface.co/tensorblock/mlabonne_NeuralDarewin-7B-GGUF/blob/main/NeuralDarewin-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlabonne_NeuralDarewin-7B-GGUF --include "NeuralDarewin-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlabonne_NeuralDarewin-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
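The same pattern-based download can be done from Python with `snapshot_download`; a minimal sketch, assuming the `huggingface_hub` package is installed:
```python
from huggingface_hub import snapshot_download

# Download every file in the repo whose name matches the pattern
# (here: all Q4_K quants), mirroring the CLI command above.
local_path = snapshot_download(
    repo_id="tensorblock/mlabonne_NeuralDarewin-7B-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # directory containing the matched files
```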
|
tensorblock/lex-hue_Delexa-V0.1-7b-GGUF | tensorblock | 2025-06-19T01:54:43Z | 13 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:lex-hue/Delexa-V0.1-7b",
"base_model:quantized:lex-hue/Delexa-V0.1-7b",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T08:12:36Z | ---
license: apache-2.0
tags:
- TensorBlock
- GGUF
base_model: lex-hue/Delexa-V0.1-7b
model-index:
- name: Delexa-V0.1-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## lex-hue/Delexa-V0.1-7b - GGUF
This repo contains GGUF format model files for [lex-hue/Delexa-V0.1-7b](https://huggingface.co/lex-hue/Delexa-V0.1-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Delexa-V0.1-7b-Q2_K.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Delexa-V0.1-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Delexa-V0.1-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Delexa-V0.1-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Delexa-V0.1-7b-Q4_0.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Delexa-V0.1-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Delexa-V0.1-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Delexa-V0.1-7b-Q5_0.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Delexa-V0.1-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Delexa-V0.1-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Delexa-V0.1-7b-Q6_K.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Delexa-V0.1-7b-Q8_0.gguf](https://huggingface.co/tensorblock/lex-hue_Delexa-V0.1-7b-GGUF/blob/main/Delexa-V0.1-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/lex-hue_Delexa-V0.1-7b-GGUF --include "Delexa-V0.1-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/lex-hue_Delexa-V0.1-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF | tensorblock | 2025-06-19T01:54:06Z | 21 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"axolotl",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:winglian/Llama-3-8b-64k-PoSE",
"base_model:quantized:winglian/Llama-3-8b-64k-PoSE",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T00:14:37Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- axolotl
- TensorBlock
- GGUF
base_model: winglian/Llama-3-8b-64k-PoSE
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## winglian/Llama-3-8b-64k-PoSE - GGUF
This repo contains GGUF format model files for [winglian/Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8b-64k-PoSE-Q2_K.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8b-64k-PoSE-Q3_K_S.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [Llama-3-8b-64k-PoSE-Q3_K_M.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3-8b-64k-PoSE-Q3_K_L.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3-8b-64k-PoSE-Q4_0.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8b-64k-PoSE-Q4_K_S.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3-8b-64k-PoSE-Q4_K_M.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3-8b-64k-PoSE-Q5_0.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8b-64k-PoSE-Q5_K_S.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3-8b-64k-PoSE-Q5_K_M.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3-8b-64k-PoSE-Q6_K.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3-8b-64k-PoSE-Q8_0.gguf](https://huggingface.co/tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF/blob/main/Llama-3-8b-64k-PoSE-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF --include "Llama-3-8b-64k-PoSE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/winglian_Llama-3-8b-64k-PoSE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF | tensorblock | 2025-06-19T01:54:04Z | 140 | 0 | null | [
"gguf",
"instruction-finetuning",
"TensorBlock",
"GGUF",
"en",
"base_model:IAAR-Shanghai/xFinder-qwen1505",
"base_model:quantized:IAAR-Shanghai/xFinder-qwen1505",
"license:cc-by-nc-nd-4.0",
"region:us",
"conversational"
] | null | 2025-04-30T00:03:38Z | ---
inference: false
language:
- en
tags:
- instruction-finetuning
- TensorBlock
- GGUF
task_categories:
- text-generation
license: cc-by-nc-nd-4.0
base_model: IAAR-Shanghai/xFinder-qwen1505
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## IAAR-Shanghai/xFinder-qwen1505 - GGUF
This repo contains GGUF format model files for [IAAR-Shanghai/xFinder-qwen1505](https://huggingface.co/IAAR-Shanghai/xFinder-qwen1505).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [xFinder-qwen1505-Q2_K.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q2_K.gguf) | Q2_K | 0.298 GB | smallest, significant quality loss - not recommended for most purposes |
| [xFinder-qwen1505-Q3_K_S.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q3_K_S.gguf) | Q3_K_S | 0.333 GB | very small, high quality loss |
| [xFinder-qwen1505-Q3_K_M.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q3_K_M.gguf) | Q3_K_M | 0.350 GB | very small, high quality loss |
| [xFinder-qwen1505-Q3_K_L.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q3_K_L.gguf) | Q3_K_L | 0.364 GB | small, substantial quality loss |
| [xFinder-qwen1505-Q4_0.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q4_0.gguf) | Q4_0 | 0.395 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [xFinder-qwen1505-Q4_K_S.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q4_K_S.gguf) | Q4_K_S | 0.397 GB | small, greater quality loss |
| [xFinder-qwen1505-Q4_K_M.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q4_K_M.gguf) | Q4_K_M | 0.407 GB | medium, balanced quality - recommended |
| [xFinder-qwen1505-Q5_0.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q5_0.gguf) | Q5_0 | 0.453 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [xFinder-qwen1505-Q5_K_S.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q5_K_S.gguf) | Q5_K_S | 0.453 GB | large, low quality loss - recommended |
| [xFinder-qwen1505-Q5_K_M.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q5_K_M.gguf) | Q5_K_M | 0.459 GB | large, very low quality loss - recommended |
| [xFinder-qwen1505-Q6_K.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q6_K.gguf) | Q6_K | 0.515 GB | very large, extremely low quality loss |
| [xFinder-qwen1505-Q8_0.gguf](https://huggingface.co/tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF/blob/main/xFinder-qwen1505-Q8_0.gguf) | Q8_0 | 0.665 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF --include "xFinder-qwen1505-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/IAAR-Shanghai_xFinder-qwen1505-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
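Once a file is downloaded, it can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings together with the ChatML prompt template shown above; the package, file choice, and generation parameters are illustrative assumptions, not part of this repo:
```python
from llama_cpp import Llama  # pip install llama-cpp-python (assumed)

# Load the downloaded GGUF file.
llm = Llama(model_path="MY_LOCAL_DIR/xFinder-qwen1505-Q2_K.gguf", n_ctx=2048)

# Fill in the ChatML template from the "Prompt template" section.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExtract the final answer: 2 + 2 = 4<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```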
|
tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF | tensorblock | 2025-06-19T01:53:22Z | 49 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:saishf/Aura-Uncensored-OAS-8B-L3",
"base_model:quantized:saishf/Aura-Uncensored-OAS-8B-L3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T16:39:07Z | ---
license: cc-by-nc-4.0
base_model: saishf/Aura-Uncensored-OAS-8B-L3
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## saishf/Aura-Uncensored-OAS-8B-L3 - GGUF
This repo contains GGUF format model files for [saishf/Aura-Uncensored-OAS-8B-L3](https://huggingface.co/saishf/Aura-Uncensored-OAS-8B-L3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Aura-Uncensored-OAS-8B-L3-Q2_K.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Aura-Uncensored-OAS-8B-L3-Q3_K_S.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [Aura-Uncensored-OAS-8B-L3-Q3_K_M.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Aura-Uncensored-OAS-8B-L3-Q3_K_L.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Aura-Uncensored-OAS-8B-L3-Q4_0.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Aura-Uncensored-OAS-8B-L3-Q4_K_S.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Aura-Uncensored-OAS-8B-L3-Q4_K_M.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Aura-Uncensored-OAS-8B-L3-Q5_0.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Aura-Uncensored-OAS-8B-L3-Q5_K_S.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Aura-Uncensored-OAS-8B-L3-Q5_K_M.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Aura-Uncensored-OAS-8B-L3-Q6_K.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Aura-Uncensored-OAS-8B-L3-Q8_0.gguf](https://huggingface.co/tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF/blob/main/Aura-Uncensored-OAS-8B-L3-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF --include "Aura-Uncensored-OAS-8B-L3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/saishf_Aura-Uncensored-OAS-8B-L3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF | tensorblock | 2025-06-19T01:52:47Z | 15 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"NousResearch/Meta-Llama-3-8B-Instruct",
"elinas/Llama-3-8B-Ultra-Instruct",
"mlabonne/ChimeraLlama-3-8B-v3",
"nvidia/Llama3-ChatQA-1.5-8B",
"Kukedlc/SmartLlama-3-8B-MS-v0.1",
"TensorBlock",
"GGUF",
"base_model:Kukedlc/NeuralMiLLaMa-8B-slerp",
"base_model:quantized:Kukedlc/NeuralMiLLaMa-8B-slerp",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T10:28:59Z | ---
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Meta-Llama-3-8B-Instruct
- elinas/Llama-3-8B-Ultra-Instruct
- mlabonne/ChimeraLlama-3-8B-v3
- nvidia/Llama3-ChatQA-1.5-8B
- Kukedlc/SmartLlama-3-8B-MS-v0.1
- TensorBlock
- GGUF
base_model: Kukedlc/NeuralMiLLaMa-8B-slerp
license: other
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Kukedlc/NeuralMiLLaMa-8B-slerp - GGUF
This repo contains GGUF format model files for [Kukedlc/NeuralMiLLaMa-8B-slerp](https://huggingface.co/Kukedlc/NeuralMiLLaMa-8B-slerp).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralMiLLaMa-8B-slerp-Q2_K.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralMiLLaMa-8B-slerp-Q3_K_S.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [NeuralMiLLaMa-8B-slerp-Q3_K_M.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [NeuralMiLLaMa-8B-slerp-Q3_K_L.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [NeuralMiLLaMa-8B-slerp-Q4_0.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralMiLLaMa-8B-slerp-Q4_K_S.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [NeuralMiLLaMa-8B-slerp-Q4_K_M.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [NeuralMiLLaMa-8B-slerp-Q5_0.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralMiLLaMa-8B-slerp-Q5_K_S.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [NeuralMiLLaMa-8B-slerp-Q5_K_M.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [NeuralMiLLaMa-8B-slerp-Q6_K.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [NeuralMiLLaMa-8B-slerp-Q8_0.gguf](https://huggingface.co/tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF/blob/main/NeuralMiLLaMa-8B-slerp-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF --include "NeuralMiLLaMa-8B-slerp-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Kukedlc_NeuralMiLLaMa-8B-slerp-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/LLM360_K2-GGUF | tensorblock | 2025-06-19T01:52:34Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"llm",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:LLM360/K2",
"base_model:quantized:LLM360/K2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T05:45:01Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- nlp
- llm
- TensorBlock
- GGUF
base_model: LLM360/K2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## LLM360/K2 - GGUF
This repo contains GGUF format model files for [LLM360/K2](https://huggingface.co/LLM360/K2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [K2-Q2_K.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q2_K.gguf) | Q2_K | 24.113 GB | smallest, significant quality loss - not recommended for most purposes |
| [K2-Q3_K_S.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q3_K_S.gguf) | Q3_K_S | 28.161 GB | very small, high quality loss |
| [K2-Q3_K_M.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q3_K_M.gguf) | Q3_K_M | 31.632 GB | very small, high quality loss |
| [K2-Q3_K_L.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q3_K_L.gguf) | Q3_K_L | 34.649 GB | small, substantial quality loss |
| [K2-Q4_0.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q4_0.gguf) | Q4_0 | 36.796 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [K2-Q4_K_S.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q4_K_S.gguf) | Q4_K_S | 37.055 GB | small, greater quality loss |
| [K2-Q4_K_M.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q4_K_M.gguf) | Q4_K_M | 39.348 GB | medium, balanced quality - recommended |
| [K2-Q5_0.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q5_0.gguf) | Q5_0 | 44.924 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [K2-Q5_K_S.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q5_K_S.gguf) | Q5_K_S | 44.924 GB | large, low quality loss - recommended |
| [K2-Q5_K_M.gguf](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q5_K_M.gguf) | Q5_K_M | 46.239 GB | large, very low quality loss - recommended |
| [K2-Q6_K](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q6_K) | Q6_K | 53.560 GB | very large, extremely low quality loss |
| [K2-Q8_0](https://huggingface.co/tensorblock/LLM360_K2-GGUF/blob/main/K2-Q8_0) | Q8_0 | 69.371 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/LLM360_K2-GGUF --include "K2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/LLM360_K2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF | tensorblock | 2025-06-19T01:51:59Z | 34 | 0 | null | [
"gguf",
"trl",
"sft",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mnoukhov/pythia2.8b-sft-tldr",
"base_model:quantized:mnoukhov/pythia2.8b-sft-tldr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T01:19:07Z | ---
license: apache-2.0
base_model: mnoukhov/pythia2.8b-sft-tldr
tags:
- trl
- sft
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: pythia2.8b-sft-tldr
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mnoukhov/pythia2.8b-sft-tldr - GGUF
This repo contains GGUF format model files for [mnoukhov/pythia2.8b-sft-tldr](https://huggingface.co/mnoukhov/pythia2.8b-sft-tldr).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [pythia2.8b-sft-tldr-Q2_K.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q2_K.gguf) | Q2_K | 1.086 GB | smallest, significant quality loss - not recommended for most purposes |
| [pythia2.8b-sft-tldr-Q3_K_S.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q3_K_S.gguf) | Q3_K_S | 1.248 GB | very small, high quality loss |
| [pythia2.8b-sft-tldr-Q3_K_M.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q3_K_M.gguf) | Q3_K_M | 1.478 GB | very small, high quality loss |
| [pythia2.8b-sft-tldr-Q3_K_L.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q3_K_L.gguf) | Q3_K_L | 1.602 GB | small, substantial quality loss |
| [pythia2.8b-sft-tldr-Q4_0.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q4_0.gguf) | Q4_0 | 1.600 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pythia2.8b-sft-tldr-Q4_K_S.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q4_K_S.gguf) | Q4_K_S | 1.613 GB | small, greater quality loss |
| [pythia2.8b-sft-tldr-Q4_K_M.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q4_K_M.gguf) | Q4_K_M | 1.787 GB | medium, balanced quality - recommended |
| [pythia2.8b-sft-tldr-Q5_0.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q5_0.gguf) | Q5_0 | 1.930 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pythia2.8b-sft-tldr-Q5_K_S.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q5_K_S.gguf) | Q5_K_S | 1.930 GB | large, low quality loss - recommended |
| [pythia2.8b-sft-tldr-Q5_K_M.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q5_K_M.gguf) | Q5_K_M | 2.070 GB | large, very low quality loss - recommended |
| [pythia2.8b-sft-tldr-Q6_K.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q6_K.gguf) | Q6_K | 2.282 GB | very large, extremely low quality loss |
| [pythia2.8b-sft-tldr-Q8_0.gguf](https://huggingface.co/tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF/blob/main/pythia2.8b-sft-tldr-Q8_0.gguf) | Q8_0 | 2.954 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF --include "pythia2.8b-sft-tldr-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mnoukhov_pythia2.8b-sft-tldr-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF | tensorblock | 2025-06-19T01:51:06Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:jamesohe/Llama3-CAS-Audit8B-GCNI-V3",
"base_model:quantized:jamesohe/Llama3-CAS-Audit8B-GCNI-V3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T09:27:28Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: jamesohe/Llama3-CAS-Audit8B-GCNI-V3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## jamesohe/Llama3-CAS-Audit8B-GCNI-V3 - GGUF
This repo contains GGUF format model files for [jamesohe/Llama3-CAS-Audit8B-GCNI-V3](https://huggingface.co/jamesohe/Llama3-CAS-Audit8B-GCNI-V3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama3-CAS-Audit8B-GCNI-V3-Q2_K.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama3-CAS-Audit8B-GCNI-V3-Q3_K_S.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [Llama3-CAS-Audit8B-GCNI-V3-Q3_K_M.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama3-CAS-Audit8B-GCNI-V3-Q3_K_L.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama3-CAS-Audit8B-GCNI-V3-Q4_0.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama3-CAS-Audit8B-GCNI-V3-Q4_K_S.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama3-CAS-Audit8B-GCNI-V3-Q4_K_M.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama3-CAS-Audit8B-GCNI-V3-Q5_0.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama3-CAS-Audit8B-GCNI-V3-Q5_K_S.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama3-CAS-Audit8B-GCNI-V3-Q5_K_M.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama3-CAS-Audit8B-GCNI-V3-Q6_K.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama3-CAS-Audit8B-GCNI-V3-Q8_0.gguf](https://huggingface.co/tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF/blob/main/Llama3-CAS-Audit8B-GCNI-V3-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF --include "Llama3-CAS-Audit8B-GCNI-V3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/jamesohe_Llama3-CAS-Audit8B-GCNI-V3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/netcat420_MFANNv0.16.10-GGUF | tensorblock | 2025-06-19T01:51:03Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:netcat420/MFANNv0.16.10",
"base_model:quantized:netcat420/MFANNv0.16.10",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T08:42:56Z | ---
base_model: netcat420/MFANNv0.16.10
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## netcat420/MFANNv0.16.10 - GGUF
This repo contains GGUF format model files for [netcat420/MFANNv0.16.10](https://huggingface.co/netcat420/MFANNv0.16.10).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MFANNv0.16.10-Q2_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [MFANNv0.16.10-Q3_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [MFANNv0.16.10-Q3_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [MFANNv0.16.10-Q3_K_L.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [MFANNv0.16.10-Q4_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MFANNv0.16.10-Q4_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [MFANNv0.16.10-Q4_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [MFANNv0.16.10-Q5_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MFANNv0.16.10-Q5_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [MFANNv0.16.10-Q5_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [MFANNv0.16.10-Q6_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [MFANNv0.16.10-Q8_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.16.10-GGUF/blob/main/MFANNv0.16.10-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.16.10-GGUF --include "MFANNv0.16.10-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.16.10-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
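Equivalently, a pattern-based fetch can be expressed in Python with `snapshot_download` (a sketch using the same repo and glob as above; not part of the original card):
```python
# Sketch: download every file matching a glob pattern from the repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tensorblock/netcat420_MFANNv0.16.10-GGUF",
    allow_patterns=["*Q4_K*gguf"],  # same glob as the --include flag above
    local_dir="MY_LOCAL_DIR",
)
```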
|
tensorblock/FPHam_L3-8B-Everything-COT-GGUF | tensorblock | 2025-06-19T01:51:00Z | 23 | 0 | null | [
"gguf",
"llm",
"llama",
"llama3",
"TensorBlock",
"GGUF",
"base_model:FPHam/L3-8B-Everything-COT",
"base_model:quantized:FPHam/L3-8B-Everything-COT",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T08:18:10Z | ---
tags:
- llm
- llama
- llama3
- TensorBlock
- GGUF
base_model: FPHam/L3-8B-Everything-COT
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## FPHam/L3-8B-Everything-COT - GGUF
This repo contains GGUF format model files for [FPHam/L3-8B-Everything-COT](https://huggingface.co/FPHam/L3-8B-Everything-COT).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L3-8B-Everything-COT-Q2_K.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [L3-8B-Everything-COT-Q3_K_S.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [L3-8B-Everything-COT-Q3_K_M.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [L3-8B-Everything-COT-Q3_K_L.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [L3-8B-Everything-COT-Q4_0.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [L3-8B-Everything-COT-Q4_K_S.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [L3-8B-Everything-COT-Q4_K_M.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [L3-8B-Everything-COT-Q5_0.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [L3-8B-Everything-COT-Q5_K_S.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [L3-8B-Everything-COT-Q5_K_M.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [L3-8B-Everything-COT-Q6_K.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [L3-8B-Everything-COT-Q8_0.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/FPHam_L3-8B-Everything-COT-GGUF --include "L3-8B-Everything-COT-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/FPHam_L3-8B-Everything-COT-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF | tensorblock | 2025-06-19T01:50:42Z | 98 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:allenai/dolma",
"dataset:allenai/tulu-v2-sft-mixture-olmo-4096",
"base_model:hamishivi/OLMo-1B-0724-SFT-hf",
"base_model:quantized:hamishivi/OLMo-1B-0724-SFT-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T07:03:50Z | ---
license: apache-2.0
datasets:
- allenai/dolma
- allenai/tulu-v2-sft-mixture-olmo-4096
language:
- en
tags:
- TensorBlock
- GGUF
base_model: hamishivi/OLMo-1B-0724-SFT-hf
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## hamishivi/OLMo-1B-0724-SFT-hf - GGUF
This repo contains GGUF format model files for [hamishivi/OLMo-1B-0724-SFT-hf](https://huggingface.co/hamishivi/OLMo-1B-0724-SFT-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|endoftext|><|user|>
{prompt}
<|assistant|>
```
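A minimal generation sketch using this template, assuming the `llama-cpp-python` bindings and the Q4_K_M file from the table below (neither is prescribed by the original card):
```python
# Sketch: run the OLMo SFT prompt template through llama-cpp-python.
# Assumes: pip install llama-cpp-python, and the GGUF file downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="OLMo-1B-0724-SFT-hf-Q4_K_M.gguf", n_ctx=2048)

prompt = "<|endoftext|><|user|>\nWhat is a GGUF file?\n<|assistant|>\n"
out = llm(prompt, max_tokens=128, stop=["<|user|>"])
print(out["choices"][0]["text"])
```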
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OLMo-1B-0724-SFT-hf-Q2_K.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q2_K.gguf) | Q2_K | 0.513 GB | smallest, significant quality loss - not recommended for most purposes |
| [OLMo-1B-0724-SFT-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q3_K_S.gguf) | Q3_K_S | 0.592 GB | very small, high quality loss |
| [OLMo-1B-0724-SFT-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q3_K_M.gguf) | Q3_K_M | 0.649 GB | very small, high quality loss |
| [OLMo-1B-0724-SFT-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q3_K_L.gguf) | Q3_K_L | 0.696 GB | small, substantial quality loss |
| [OLMo-1B-0724-SFT-hf-Q4_0.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q4_0.gguf) | Q4_0 | 0.748 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OLMo-1B-0724-SFT-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q4_K_S.gguf) | Q4_K_S | 0.755 GB | small, greater quality loss |
| [OLMo-1B-0724-SFT-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q4_K_M.gguf) | Q4_K_M | 0.791 GB | medium, balanced quality - recommended |
| [OLMo-1B-0724-SFT-hf-Q5_0.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q5_0.gguf) | Q5_0 | 0.895 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OLMo-1B-0724-SFT-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q5_K_S.gguf) | Q5_K_S | 0.895 GB | large, low quality loss - recommended |
| [OLMo-1B-0724-SFT-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q5_K_M.gguf) | Q5_K_M | 0.918 GB | large, very low quality loss - recommended |
| [OLMo-1B-0724-SFT-hf-Q6_K.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q6_K.gguf) | Q6_K | 1.052 GB | very large, extremely low quality loss |
| [OLMo-1B-0724-SFT-hf-Q8_0.gguf](https://huggingface.co/tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF/blob/main/OLMo-1B-0724-SFT-hf-Q8_0.gguf) | Q8_0 | 1.362 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF --include "OLMo-1B-0724-SFT-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/hamishivi_OLMo-1B-0724-SFT-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/netcat420_MFANNv0.13.10-GGUF | tensorblock | 2025-06-19T01:50:39Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:netcat420/MFANNv0.13.10",
"base_model:quantized:netcat420/MFANNv0.13.10",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T06:34:18Z | ---
base_model: netcat420/MFANNv0.13.10
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## netcat420/MFANNv0.13.10 - GGUF
This repo contains GGUF format model files for [netcat420/MFANNv0.13.10](https://huggingface.co/netcat420/MFANNv0.13.10).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MFANNv0.13.10-Q2_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [MFANNv0.13.10-Q3_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [MFANNv0.13.10-Q3_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [MFANNv0.13.10-Q3_K_L.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [MFANNv0.13.10-Q4_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MFANNv0.13.10-Q4_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [MFANNv0.13.10-Q4_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [MFANNv0.13.10-Q5_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MFANNv0.13.10-Q5_K_S.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [MFANNv0.13.10-Q5_K_M.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [MFANNv0.13.10-Q6_K.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [MFANNv0.13.10-Q8_0.gguf](https://huggingface.co/tensorblock/netcat420_MFANNv0.13.10-GGUF/blob/main/MFANNv0.13.10-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.13.10-GGUF --include "MFANNv0.13.10-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/netcat420_MFANNv0.13.10-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF | tensorblock | 2025-06-19T01:50:33Z | 151 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"mlx",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:mlx-community/Llama-3.2-1B-Instruct-bf16",
"base_model:quantized:mlx-community/Llama-3.2-1B-Instruct-bf16",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T05:17:08Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
- TensorBlock
- GGUF
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: mlx-community/Llama-3.2-1B-Instruct-bf16
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlx-community/Llama-3.2-1B-Instruct-bf16 - GGUF
This repo contains GGUF format model files for [mlx-community/Llama-3.2-1B-Instruct-bf16](https://huggingface.co/mlx-community/Llama-3.2-1B-Instruct-bf16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 28 Apr 2025
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3.2-1B-Instruct-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q2_K.gguf) | Q2_K | 0.581 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.2-1B-Instruct-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q3_K_S.gguf) | Q3_K_S | 0.642 GB | very small, high quality loss |
| [Llama-3.2-1B-Instruct-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q3_K_M.gguf) | Q3_K_M | 0.691 GB | very small, high quality loss |
| [Llama-3.2-1B-Instruct-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q3_K_L.gguf) | Q3_K_L | 0.733 GB | small, substantial quality loss |
| [Llama-3.2-1B-Instruct-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q4_0.gguf) | Q4_0 | 0.771 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.2-1B-Instruct-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q4_K_S.gguf) | Q4_K_S | 0.776 GB | small, greater quality loss |
| [Llama-3.2-1B-Instruct-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q4_K_M.gguf) | Q4_K_M | 0.808 GB | medium, balanced quality - recommended |
| [Llama-3.2-1B-Instruct-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q5_0.gguf) | Q5_0 | 0.893 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.2-1B-Instruct-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q5_K_S.gguf) | Q5_K_S | 0.893 GB | large, low quality loss - recommended |
| [Llama-3.2-1B-Instruct-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q5_K_M.gguf) | Q5_K_M | 0.912 GB | large, very low quality loss - recommended |
| [Llama-3.2-1B-Instruct-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q6_K.gguf) | Q6_K | 1.022 GB | very large, extremely low quality loss |
| [Llama-3.2-1B-Instruct-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF/blob/main/Llama-3.2-1B-Instruct-bf16-Q8_0.gguf) | Q8_0 | 1.321 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF --include "Llama-3.2-1B-Instruct-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlx-community_Llama-3.2-1B-Instruct-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF | tensorblock | 2025-06-19T01:50:07Z | 75 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:aws-neuron/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23",
"base_model:quantized:aws-neuron/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T00:46:43Z | ---
base_model: aws-neuron/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## aws-neuron/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23 - GGUF
This repo contains GGUF format model files for [aws-neuron/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23](https://huggingface.co/aws-neuron/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q2_K.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q2_K.gguf) | Q2_K | 0.001 GB | smallest, significant quality loss - not recommended for most purposes |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q3_K_S.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q3_K_S.gguf) | Q3_K_S | 0.001 GB | very small, high quality loss |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q3_K_M.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q3_K_M.gguf) | Q3_K_M | 0.001 GB | very small, high quality loss |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q3_K_L.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q3_K_L.gguf) | Q3_K_L | 0.001 GB | small, substantial quality loss |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q4_0.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q4_0.gguf) | Q4_0 | 0.001 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q4_K_S.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q4_K_S.gguf) | Q4_K_S | 0.001 GB | small, greater quality loss |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q4_K_M.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q4_K_M.gguf) | Q4_K_M | 0.001 GB | medium, balanced quality - recommended |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q5_0.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q5_0.gguf) | Q5_0 | 0.001 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q5_K_S.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q5_K_S.gguf) | Q5_K_S | 0.001 GB | large, low quality loss - recommended |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q5_K_M.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q5_K_M.gguf) | Q5_K_M | 0.001 GB | large, very low quality loss - recommended |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q6_K.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q6_K.gguf) | Q6_K | 0.001 GB | very large, extremely low quality loss |
| [mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q8_0.gguf](https://huggingface.co/tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF/blob/main/mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q8_0.gguf) | Q8_0 | 0.001 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF --include "mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/aws-neuron_mixtral-instruct-seqlen-4096-bs-4-optimum-0-0-23-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF | tensorblock | 2025-06-19T01:49:52Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b",
"base_model:quantized:mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T20:54:54Z | ---
library_name: transformers
license: other
base_model: mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: mlfoundations-dev_code-stratos-unverified-scaled-0.25_stratos_7b
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b](https://huggingface.co/mlfoundations-dev/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
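For multi-turn conversations, the ChatML-style template above is usually assembled from a list of messages; a small illustrative helper (message contents are placeholders, not from the card):
```python
# Sketch: build a ChatML-style prompt string from a list of chat messages.
def build_chatml_prompt(messages):
    # messages: list of {"role": "system" | "user" | "assistant", "content": str}
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant")  # leave the assistant turn open
    return "\n".join(parts)

print(build_chatml_prompt([
    {"role": "system", "content": "You are a concise coding assistant."},
    {"role": "user", "content": "Explain Q4_K_M quantization in one sentence."},
]))
```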
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q2_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q6_K.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q8_0.gguf](https://huggingface.co/tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF/blob/main/mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF --include "mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlfoundations-dev_mlfoundations-dev_code-stratos-unverified-scaled-0_25_stratos_7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF | tensorblock | 2025-06-19T01:49:30Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:kayfour/T3Q-gemma2-9B-it-Ko-safe",
"base_model:quantized:kayfour/T3Q-gemma2-9B-it-Ko-safe",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T16:36:41Z | ---
library_name: transformers
license: gemma
tags:
- TensorBlock
- GGUF
base_model: kayfour/T3Q-gemma2-9B-it-Ko-safe
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## kayfour/T3Q-gemma2-9B-it-Ko-safe - GGUF
This repo contains GGUF format model files for [kayfour/T3Q-gemma2-9B-it-Ko-safe](https://huggingface.co/kayfour/T3Q-gemma2-9B-it-Ko-safe).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
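To query the model directly with llama.cpp, substitute your request into the template above and pass the result verbatim. The following is an illustrative sketch only (the filename is taken from the table below; llama.cpp normally inserts the leading `<bos>` token itself, so it is omitted from the prompt string):
```shell
# Illustrative sketch: fill the Gemma chat template by hand and pass it with -p.
# Assumes llama.cpp's `llama-cli` is available and the Q4_K_M file has been downloaded.
llama-cli \
  -m T3Q-gemma2-9B-it-Ko-safe-Q4_K_M.gguf \
  -p "<start_of_turn>user
Please introduce yourself in one sentence.<end_of_turn>
<start_of_turn>model
" \
  -n 128
```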
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [T3Q-gemma2-9B-it-Ko-safe-Q2_K.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q2_K.gguf) | Q2_K | 3.805 GB | smallest, significant quality loss - not recommended for most purposes |
| [T3Q-gemma2-9B-it-Ko-safe-Q3_K_S.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q3_K_S.gguf) | Q3_K_S | 4.338 GB | very small, high quality loss |
| [T3Q-gemma2-9B-it-Ko-safe-Q3_K_M.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q3_K_M.gguf) | Q3_K_M | 4.762 GB | very small, high quality loss |
| [T3Q-gemma2-9B-it-Ko-safe-Q3_K_L.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q3_K_L.gguf) | Q3_K_L | 5.132 GB | small, substantial quality loss |
| [T3Q-gemma2-9B-it-Ko-safe-Q4_0.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q4_0.gguf) | Q4_0 | 5.443 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [T3Q-gemma2-9B-it-Ko-safe-Q4_K_S.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q4_K_S.gguf) | Q4_K_S | 5.479 GB | small, greater quality loss |
| [T3Q-gemma2-9B-it-Ko-safe-Q4_K_M.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q4_K_M.gguf) | Q4_K_M | 5.761 GB | medium, balanced quality - recommended |
| [T3Q-gemma2-9B-it-Ko-safe-Q5_0.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q5_0.gguf) | Q5_0 | 6.484 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [T3Q-gemma2-9B-it-Ko-safe-Q5_K_S.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q5_K_S.gguf) | Q5_K_S | 6.484 GB | large, low quality loss - recommended |
| [T3Q-gemma2-9B-it-Ko-safe-Q5_K_M.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q5_K_M.gguf) | Q5_K_M | 6.647 GB | large, very low quality loss - recommended |
| [T3Q-gemma2-9B-it-Ko-safe-Q6_K.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q6_K.gguf) | Q6_K | 7.589 GB | very large, extremely low quality loss |
| [T3Q-gemma2-9B-it-Ko-safe-Q8_0.gguf](https://huggingface.co/tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF/blob/main/T3Q-gemma2-9B-it-Ko-safe-Q8_0.gguf) | Q8_0 | 9.827 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF --include "T3Q-gemma2-9B-it-Ko-safe-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/kayfour_T3Q-gemma2-9B-it-Ko-safe-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF | tensorblock | 2025-06-19T01:49:19Z | 124 | 0 | null | [
"gguf",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"en",
"dataset:princeton-nlp/llama3-ultrafeedback-armorm",
"base_model:Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1",
"base_model:quantized:Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T14:42:11Z | ---
base_model: Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- TensorBlock
- GGUF
datasets:
- princeton-nlp/llama3-ultrafeedback-armorm
license: llama3.1
language:
- en
model-index:
- name: Llama-3.1-8B-Magpie-Align-v0.1
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1 - GGUF
This repo contains GGUF format model files for [Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1](https://huggingface.co/Magpie-Align/Llama-3.1-8B-Magpie-Align-v0.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
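Recent llama.cpp builds can also apply this chat template automatically. A hedged sketch follows (assuming the `-cnv` conversation mode and `--chat-template` options are available in your build; the GGUF metadata usually already carries the Llama 3 template, so the explicit flag is only a fallback):
```shell
# Interactive chat sketch; flags assume a recent llama.cpp build.
llama-cli \
  -m Llama-3.1-8B-Magpie-Align-v0.1-Q4_K_M.gguf \
  -cnv --chat-template llama3
```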
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q2_K.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q4_0.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q5_0.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q6_K.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3.1-8B-Magpie-Align-v0.1-Q8_0.gguf](https://huggingface.co/tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF/blob/main/Llama-3.1-8B-Magpie-Align-v0.1-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF --include "Llama-3.1-8B-Magpie-Align-v0.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Magpie-Align_Llama-3.1-8B-Magpie-Align-v0.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF | tensorblock | 2025-06-19T01:49:14Z | 55 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Minitron-4B-Depth-Base",
"base_model:quantized:nvidia/Llama-3.1-Minitron-4B-Depth-Base",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T14:15:48Z | ---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
- TensorBlock
- GGUF
base_model: nvidia/Llama-3.1-Minitron-4B-Depth-Base
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## nvidia/Llama-3.1-Minitron-4B-Depth-Base - GGUF
This repo contains GGUF format model files for [nvidia/Llama-3.1-Minitron-4B-Depth-Base](https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Depth-Base).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3.1-Minitron-4B-Depth-Base-Q2_K.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q2_K.gguf) | Q2_K | 1.895 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.1-Minitron-4B-Depth-Base-Q3_K_S.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q3_K_S.gguf) | Q3_K_S | 2.165 GB | very small, high quality loss |
| [Llama-3.1-Minitron-4B-Depth-Base-Q3_K_M.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q3_K_M.gguf) | Q3_K_M | 2.342 GB | very small, high quality loss |
| [Llama-3.1-Minitron-4B-Depth-Base-Q3_K_L.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q3_K_L.gguf) | Q3_K_L | 2.493 GB | small, substantial quality loss |
| [Llama-3.1-Minitron-4B-Depth-Base-Q4_0.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q4_0.gguf) | Q4_0 | 2.698 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.1-Minitron-4B-Depth-Base-Q4_K_S.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q4_K_S.gguf) | Q4_K_S | 2.715 GB | small, greater quality loss |
| [Llama-3.1-Minitron-4B-Depth-Base-Q4_K_M.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q4_K_M.gguf) | Q4_K_M | 2.828 GB | medium, balanced quality - recommended |
| [Llama-3.1-Minitron-4B-Depth-Base-Q5_0.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q5_0.gguf) | Q5_0 | 3.200 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.1-Minitron-4B-Depth-Base-Q5_K_S.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q5_K_S.gguf) | Q5_K_S | 3.200 GB | large, low quality loss - recommended |
| [Llama-3.1-Minitron-4B-Depth-Base-Q5_K_M.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q5_K_M.gguf) | Q5_K_M | 3.266 GB | large, very low quality loss - recommended |
| [Llama-3.1-Minitron-4B-Depth-Base-Q6_K.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q6_K.gguf) | Q6_K | 3.733 GB | very large, extremely low quality loss |
| [Llama-3.1-Minitron-4B-Depth-Base-Q8_0.gguf](https://huggingface.co/tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF/blob/main/Llama-3.1-Minitron-4B-Depth-Base-Q8_0.gguf) | Q8_0 | 4.832 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF --include "Llama-3.1-Minitron-4B-Depth-Base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/nvidia_Llama-3.1-Minitron-4B-Depth-Base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF | tensorblock | 2025-06-19T01:48:47Z | 52 | 0 | transformers | [
"transformers",
"gguf",
"Web3",
"Domain-Specific",
"NLP",
"Intent Recognition",
"Solidity",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:brianknowsai/Brian-Llama-3.2-3B",
"base_model:quantized:brianknowsai/Brian-Llama-3.2-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T09:43:08Z | ---
license: llama3.2
metrics:
- perplexity
base_model: brianknowsai/Brian-Llama-3.2-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- Web3
- Domain-Specific
- NLP
- Intent Recognition
- Solidity
- TensorBlock
- GGUF
language:
- en
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## brianknowsai/Brian-Llama-3.2-3B - GGUF
This repo contains GGUF format model files for [brianknowsai/Brian-Llama-3.2-3B](https://huggingface.co/brianknowsai/Brian-Llama-3.2-3B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Brian-Llama-3.2-3B-Q2_K.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q2_K.gguf) | Q2_K | 1.364 GB | smallest, significant quality loss - not recommended for most purposes |
| [Brian-Llama-3.2-3B-Q3_K_S.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q3_K_S.gguf) | Q3_K_S | 1.543 GB | very small, high quality loss |
| [Brian-Llama-3.2-3B-Q3_K_M.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q3_K_M.gguf) | Q3_K_M | 1.687 GB | very small, high quality loss |
| [Brian-Llama-3.2-3B-Q3_K_L.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q3_K_L.gguf) | Q3_K_L | 1.815 GB | small, substantial quality loss |
| [Brian-Llama-3.2-3B-Q4_0.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q4_0.gguf) | Q4_0 | 1.917 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Brian-Llama-3.2-3B-Q4_K_S.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q4_K_S.gguf) | Q4_K_S | 1.928 GB | small, greater quality loss |
| [Brian-Llama-3.2-3B-Q4_K_M.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q4_K_M.gguf) | Q4_K_M | 2.019 GB | medium, balanced quality - recommended |
| [Brian-Llama-3.2-3B-Q5_0.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q5_0.gguf) | Q5_0 | 2.270 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Brian-Llama-3.2-3B-Q5_K_S.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q5_K_S.gguf) | Q5_K_S | 2.270 GB | large, low quality loss - recommended |
| [Brian-Llama-3.2-3B-Q5_K_M.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q5_K_M.gguf) | Q5_K_M | 2.322 GB | large, very low quality loss - recommended |
| [Brian-Llama-3.2-3B-Q6_K.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q6_K.gguf) | Q6_K | 2.644 GB | very large, extremely low quality loss |
| [Brian-Llama-3.2-3B-Q8_0.gguf](https://huggingface.co/tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF/blob/main/Brian-Llama-3.2-3B-Q8_0.gguf) | Q8_0 | 3.422 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF --include "Brian-Llama-3.2-3B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/brianknowsai_Brian-Llama-3.2-3B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF | tensorblock | 2025-06-19T01:48:36Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1",
"base_model:quantized:AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T07:52:52Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1 - GGUF
This repo contains GGUF format model files for [AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1](https://huggingface.co/AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q2_K.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q4_0.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q5_0.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q6_K.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [ktdsbaseLM-v0.16-onbased-llama3.1-Q8_0.gguf](https://huggingface.co/tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/blob/main/ktdsbaseLM-v0.16-onbased-llama3.1-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF --include "ktdsbaseLM-v0.16-onbased-llama3.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/AIDX-ktds_ktdsbaseLM-v0.16-onbased-llama3.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF | tensorblock | 2025-06-19T01:47:56Z | 54 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-3.3",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"region:us",
"conversational"
] | text-generation | 2025-04-26T23:27:14Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.3
- TensorBlock
- GGUF
base_model: ibm-granite/granite-3.3-2b-instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ibm-granite/granite-3.3-2b-instruct - GGUF
This repo contains GGUF format model files for [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|start_of_role|>system<|end_of_role|>{system_prompt}<|end_of_text|>
<|start_of_role|>user<|end_of_role|>{prompt}<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [granite-3.3-2b-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q2_K.gguf) | Q2_K | 0.978 GB | smallest, significant quality loss - not recommended for most purposes |
| [granite-3.3-2b-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q3_K_S.gguf) | Q3_K_S | 1.130 GB | very small, high quality loss |
| [granite-3.3-2b-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q3_K_M.gguf) | Q3_K_M | 1.252 GB | very small, high quality loss |
| [granite-3.3-2b-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q3_K_L.gguf) | Q3_K_L | 1.357 GB | small, substantial quality loss |
| [granite-3.3-2b-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q4_0.gguf) | Q4_0 | 1.453 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [granite-3.3-2b-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q4_K_S.gguf) | Q4_K_S | 1.464 GB | small, greater quality loss |
| [granite-3.3-2b-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q4_K_M.gguf) | Q4_K_M | 1.545 GB | medium, balanced quality - recommended |
| [granite-3.3-2b-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q5_0.gguf) | Q5_0 | 1.757 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [granite-3.3-2b-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q5_K_S.gguf) | Q5_K_S | 1.757 GB | large, low quality loss - recommended |
| [granite-3.3-2b-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q5_K_M.gguf) | Q5_K_M | 1.805 GB | large, very low quality loss - recommended |
| [granite-3.3-2b-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q6_K.gguf) | Q6_K | 2.081 GB | very large, extremely low quality loss |
| [granite-3.3-2b-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF/blob/main/granite-3.3-2b-instruct-Q8_0.gguf) | Q8_0 | 2.694 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF --include "granite-3.3-2b-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ibm-granite_granite-3.3-2b-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Henk717_spring-dragon-GGUF | tensorblock | 2025-06-19T01:47:13Z | 46 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Henk717/spring-dragon",
"base_model:quantized:Henk717/spring-dragon",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T19:28:28Z | ---
license: llama2
tags:
- TensorBlock
- GGUF
base_model: Henk717/spring-dragon
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Henk717/spring-dragon - GGUF
This repo contains GGUF format model files for [Henk717/spring-dragon](https://huggingface.co/Henk717/spring-dragon).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [spring-dragon-Q2_K.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [spring-dragon-Q3_K_S.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [spring-dragon-Q3_K_M.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [spring-dragon-Q3_K_L.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [spring-dragon-Q4_0.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [spring-dragon-Q4_K_S.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [spring-dragon-Q4_K_M.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [spring-dragon-Q5_0.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [spring-dragon-Q5_K_S.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [spring-dragon-Q5_K_M.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [spring-dragon-Q6_K.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [spring-dragon-Q8_0.gguf](https://huggingface.co/tensorblock/Henk717_spring-dragon-GGUF/blob/main/spring-dragon-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Henk717_spring-dragon-GGUF --include "spring-dragon-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Henk717_spring-dragon-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF | tensorblock | 2025-06-19T01:45:20Z | 32 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"TensorBlock",
"GGUF",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:InfiniAILab/OpenR1-Qwen-7B-SFT-Instruct",
"base_model:quantized:InfiniAILab/OpenR1-Qwen-7B-SFT-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T02:53:01Z | ---
base_model: InfiniAILab/OpenR1-Qwen-7B-SFT-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: OpenR1-Qwen-7B-SFT-Instruct
tags:
- generated_from_trainer
- open-r1
- trl
- sft
- TensorBlock
- GGUF
licence: license
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## InfiniAILab/OpenR1-Qwen-7B-SFT-Instruct - GGUF
This repo contains GGUF format model files for [InfiniAILab/OpenR1-Qwen-7B-SFT-Instruct](https://huggingface.co/InfiniAILab/OpenR1-Qwen-7B-SFT-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OpenR1-Qwen-7B-SFT-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [OpenR1-Qwen-7B-SFT-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [OpenR1-Qwen-7B-SFT-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [OpenR1-Qwen-7B-SFT-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [OpenR1-Qwen-7B-SFT-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OpenR1-Qwen-7B-SFT-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [OpenR1-Qwen-7B-SFT-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [OpenR1-Qwen-7B-SFT-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OpenR1-Qwen-7B-SFT-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [OpenR1-Qwen-7B-SFT-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [OpenR1-Qwen-7B-SFT-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [OpenR1-Qwen-7B-SFT-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF/blob/main/OpenR1-Qwen-7B-SFT-Instruct-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF --include "OpenR1-Qwen-7B-SFT-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/InfiniAILab_OpenR1-Qwen-7B-SFT-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF | tensorblock | 2025-06-19T01:45:18Z | 86 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"TensorBlock",
"GGUF",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"base_model:Aspik101/vicuna-13b-v1.5-PL-lora_unload",
"base_model:quantized:Aspik101/vicuna-13b-v1.5-PL-lora_unload",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T01:58:03Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- TensorBlock
- GGUF
base_model: Aspik101/vicuna-13b-v1.5-PL-lora_unload
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Aspik101/vicuna-13b-v1.5-PL-lora_unload - GGUF
This repo contains GGUF format model files for [Aspik101/vicuna-13b-v1.5-PL-lora_unload](https://huggingface.co/Aspik101/vicuna-13b-v1.5-PL-lora_unload).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [vicuna-13b-v1.5-PL-lora_unload-Q2_K.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q2_K.gguf) | Q2_K | 4.854 GB | smallest, significant quality loss - not recommended for most purposes |
| [vicuna-13b-v1.5-PL-lora_unload-Q3_K_S.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q3_K_S.gguf) | Q3_K_S | 5.659 GB | very small, high quality loss |
| [vicuna-13b-v1.5-PL-lora_unload-Q3_K_M.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q3_K_M.gguf) | Q3_K_M | 6.338 GB | very small, high quality loss |
| [vicuna-13b-v1.5-PL-lora_unload-Q3_K_L.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q3_K_L.gguf) | Q3_K_L | 6.930 GB | small, substantial quality loss |
| [vicuna-13b-v1.5-PL-lora_unload-Q4_0.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q4_0.gguf) | Q4_0 | 7.366 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vicuna-13b-v1.5-PL-lora_unload-Q4_K_S.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q4_K_S.gguf) | Q4_K_S | 7.423 GB | small, greater quality loss |
| [vicuna-13b-v1.5-PL-lora_unload-Q4_K_M.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q4_K_M.gguf) | Q4_K_M | 7.866 GB | medium, balanced quality - recommended |
| [vicuna-13b-v1.5-PL-lora_unload-Q5_0.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q5_0.gguf) | Q5_0 | 8.972 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vicuna-13b-v1.5-PL-lora_unload-Q5_K_S.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q5_K_S.gguf) | Q5_K_S | 8.972 GB | large, low quality loss - recommended |
| [vicuna-13b-v1.5-PL-lora_unload-Q5_K_M.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q5_K_M.gguf) | Q5_K_M | 9.230 GB | large, very low quality loss - recommended |
| [vicuna-13b-v1.5-PL-lora_unload-Q6_K.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q6_K.gguf) | Q6_K | 10.679 GB | very large, extremely low quality loss |
| [vicuna-13b-v1.5-PL-lora_unload-Q8_0.gguf](https://huggingface.co/tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF/blob/main/vicuna-13b-v1.5-PL-lora_unload-Q8_0.gguf) | Q8_0 | 13.831 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF --include "vicuna-13b-v1.5-PL-lora_unload-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Aspik101_vicuna-13b-v1.5-PL-lora_unload-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF | tensorblock | 2025-06-19T01:42:17Z | 1 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran2k",
"base_model:quantized:MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran2k",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T15:46:30Z | ---
base_model: MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran2k
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran2k - GGUF
This repo contains GGUF format model files for [MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran2k](https://huggingface.co/MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran2k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q2_K.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q3_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q3_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q3_K_L.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q4_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q4_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q4_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q5_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q5_K_S.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q5_K_M.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q6_K.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-SlimOrca-OP-U2048-ran2k-Q8_0.gguf](https://huggingface.co/tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF/blob/main/Mistral-7B-SlimOrca-OP-U2048-ran2k-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF --include "Mistral-7B-SlimOrca-OP-U2048-ran2k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MNCKim_Mistral-7B-SlimOrca-OP-U2048-ran2k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF | tensorblock | 2025-06-19T01:42:07Z | 85 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"TensorBlock",
"GGUF",
"text-generation",
"ar",
"ary",
"dataset:MBZUAI-Paris/Darija-SFT-Mixture",
"base_model:MBZUAI-Paris/Atlas-Chat-9B",
"base_model:quantized:MBZUAI-Paris/Atlas-Chat-9B",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T13:37:05Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_button_content: Acknowledge license
tags:
- conversational
- TensorBlock
- GGUF
language:
- ar
- ary
datasets:
- MBZUAI-Paris/Darija-SFT-Mixture
base_model: MBZUAI-Paris/Atlas-Chat-9B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MBZUAI-Paris/Atlas-Chat-9B - GGUF
This repo contains GGUF format model files for [MBZUAI-Paris/Atlas-Chat-9B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
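For illustration, the placeholder above can be filled with a plain string format before the text is handed to a llama.cpp runtime; the helper below and the example question are assumptions, not part of the upstream card.
```python
# Hypothetical helper that fills the Gemma-style chat template shown above.
ATLAS_CHAT_TEMPLATE = (
    "<bos><start_of_turn>user\n"
    "{prompt}<end_of_turn>\n"
    "<start_of_turn>model\n"
)

def build_prompt(user_message: str) -> str:
    """Return the full prompt string expected by the Atlas-Chat GGUF files."""
    return ATLAS_CHAT_TEMPLATE.format(prompt=user_message)

# Example: a Moroccan Darija question, matching the model's target languages.
print(build_prompt("شنو هي عاصمة المغرب؟"))
```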
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Atlas-Chat-9B-Q2_K.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q2_K.gguf) | Q2_K | 3.805 GB | smallest, significant quality loss - not recommended for most purposes |
| [Atlas-Chat-9B-Q3_K_S.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q3_K_S.gguf) | Q3_K_S | 4.338 GB | very small, high quality loss |
| [Atlas-Chat-9B-Q3_K_M.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q3_K_M.gguf) | Q3_K_M | 4.762 GB | very small, high quality loss |
| [Atlas-Chat-9B-Q3_K_L.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q3_K_L.gguf) | Q3_K_L | 5.132 GB | small, substantial quality loss |
| [Atlas-Chat-9B-Q4_0.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q4_0.gguf) | Q4_0 | 5.443 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Atlas-Chat-9B-Q4_K_S.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q4_K_S.gguf) | Q4_K_S | 5.479 GB | small, greater quality loss |
| [Atlas-Chat-9B-Q4_K_M.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q4_K_M.gguf) | Q4_K_M | 5.761 GB | medium, balanced quality - recommended |
| [Atlas-Chat-9B-Q5_0.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q5_0.gguf) | Q5_0 | 6.484 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Atlas-Chat-9B-Q5_K_S.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q5_K_S.gguf) | Q5_K_S | 6.484 GB | large, low quality loss - recommended |
| [Atlas-Chat-9B-Q5_K_M.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q5_K_M.gguf) | Q5_K_M | 6.647 GB | large, very low quality loss - recommended |
| [Atlas-Chat-9B-Q6_K.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q6_K.gguf) | Q6_K | 7.589 GB | very large, extremely low quality loss |
| [Atlas-Chat-9B-Q8_0.gguf](https://huggingface.co/tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF/blob/main/Atlas-Chat-9B-Q8_0.gguf) | Q8_0 | 9.827 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF --include "Atlas-Chat-9B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MBZUAI-Paris_Atlas-Chat-9B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF | tensorblock | 2025-06-19T01:41:29Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:riddickz/Llama-3.1-8B-Instruct_kg3.5k_2e5",
"base_model:quantized:riddickz/Llama-3.1-8B-Instruct_kg3.5k_2e5",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T01:33:31Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: riddickz/Llama-3.1-8B-Instruct_kg3.5k_2e5
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## riddickz/Llama-3.1-8B-Instruct_kg3.5k_2e5 - GGUF
This repo contains GGUF format model files for [riddickz/Llama-3.1-8B-Instruct_kg3.5k_2e5](https://huggingface.co/riddickz/Llama-3.1-8B-Instruct_kg3.5k_2e5).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q2_K.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q3_K_S.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q3_K_M.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q3_K_L.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q4_0.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q4_K_S.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q4_K_M.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q5_0.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q5_K_S.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q5_K_M.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q6_K.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3.1-8B-Instruct_kg3.5k_2e5-Q8_0.gguf](https://huggingface.co/tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF/blob/main/Llama-3.1-8B-Instruct_kg3.5k_2e5-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF --include "Llama-3.1-8B-Instruct_kg3.5k_2e5-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/riddickz_Llama-3.1-8B-Instruct_kg3.5k_2e5-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF | tensorblock | 2025-06-19T01:41:16Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"sft",
"TensorBlock",
"GGUF",
"base_model:omrisap/Qwen2.5-1.5B_30K_COT_SFT",
"base_model:quantized:omrisap/Qwen2.5-1.5B_30K_COT_SFT",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-22T22:54:20Z | ---
library_name: transformers
tags:
- trl
- sft
- TensorBlock
- GGUF
base_model: omrisap/Qwen2.5-1.5B_30K_COT_SFT
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## omrisap/Qwen2.5-1.5B_30K_COT_SFT - GGUF
This repo contains GGUF format model files for [omrisap/Qwen2.5-1.5B_30K_COT_SFT](https://huggingface.co/omrisap/Qwen2.5-1.5B_30K_COT_SFT).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-1.5B_30K_COT_SFT-Q2_K.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q2_K.gguf) | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-1.5B_30K_COT_SFT-Q3_K_S.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q3_K_S.gguf) | Q3_K_S | 0.761 GB | very small, high quality loss |
| [Qwen2.5-1.5B_30K_COT_SFT-Q3_K_M.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q3_K_M.gguf) | Q3_K_M | 0.824 GB | very small, high quality loss |
| [Qwen2.5-1.5B_30K_COT_SFT-Q3_K_L.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q3_K_L.gguf) | Q3_K_L | 0.880 GB | small, substantial quality loss |
| [Qwen2.5-1.5B_30K_COT_SFT-Q4_0.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q4_0.gguf) | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-1.5B_30K_COT_SFT-Q4_K_S.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q4_K_S.gguf) | Q4_K_S | 0.940 GB | small, greater quality loss |
| [Qwen2.5-1.5B_30K_COT_SFT-Q4_K_M.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q4_K_M.gguf) | Q4_K_M | 0.986 GB | medium, balanced quality - recommended |
| [Qwen2.5-1.5B_30K_COT_SFT-Q5_0.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q5_0.gguf) | Q5_0 | 1.098 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-1.5B_30K_COT_SFT-Q5_K_S.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q5_K_S.gguf) | Q5_K_S | 1.098 GB | large, low quality loss - recommended |
| [Qwen2.5-1.5B_30K_COT_SFT-Q5_K_M.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q5_K_M.gguf) | Q5_K_M | 1.125 GB | large, very low quality loss - recommended |
| [Qwen2.5-1.5B_30K_COT_SFT-Q6_K.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q6_K.gguf) | Q6_K | 1.272 GB | very large, extremely low quality loss |
| [Qwen2.5-1.5B_30K_COT_SFT-Q8_0.gguf](https://huggingface.co/tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF/blob/main/Qwen2.5-1.5B_30K_COT_SFT-Q8_0.gguf) | Q8_0 | 1.646 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF --include "Qwen2.5-1.5B_30K_COT_SFT-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/omrisap_Qwen2.5-1.5B_30K_COT_SFT-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF | tensorblock | 2025-06-19T01:41:12Z | 62 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf",
"base_model:quantized:ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-22T22:50:16Z | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf - GGUF
This repo contains GGUF format model files for [ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf](https://huggingface.co/ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q2_K.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q2_K.gguf) | Q2_K | 8.890 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q3_K_S.gguf) | Q3_K_S | 10.400 GB | very small, high quality loss |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q3_K_M.gguf) | Q3_K_M | 11.474 GB | very small, high quality loss |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q3_K_L.gguf) | Q3_K_L | 12.401 GB | small, substantial quality loss |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q4_0.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q4_0.gguf) | Q4_0 | 13.442 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q4_K_S.gguf) | Q4_K_S | 13.549 GB | small, greater quality loss |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q4_K_M.gguf) | Q4_K_M | 14.334 GB | medium, balanced quality - recommended |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q5_0.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q5_0.gguf) | Q5_0 | 16.304 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q5_K_S.gguf) | Q5_K_S | 16.304 GB | large, low quality loss - recommended |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q5_K_M.gguf) | Q5_K_M | 16.764 GB | large, very low quality loss - recommended |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q6_K.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q6_K.gguf) | Q6_K | 19.346 GB | very large, extremely low quality loss |
| [Mistral-Small-3.1-24B-Instruct-2503-hf-Q8_0.gguf](https://huggingface.co/tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF/blob/main/Mistral-Small-3.1-24B-Instruct-2503-hf-Q8_0.gguf) | Q8_0 | 25.055 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF --include "Mistral-Small-3.1-24B-Instruct-2503-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/ZeroAgency_Mistral-Small-3.1-24B-Instruct-2503-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF | tensorblock | 2025-06-19T01:40:41Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:rahatneuron/shortgpt_llama3_5L_50",
"base_model:quantized:rahatneuron/shortgpt_llama3_5L_50",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T06:35:49Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: rahatneuron/shortgpt_llama3_5L_50
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## rahatneuron/shortgpt_llama3_5L_50 - GGUF
This repo contains GGUF format model files for [rahatneuron/shortgpt_llama3_5L_50](https://huggingface.co/rahatneuron/shortgpt_llama3_5L_50).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [shortgpt_llama3_5L_50-Q2_K.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q2_K.gguf) | Q2_K | 1.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [shortgpt_llama3_5L_50-Q3_K_S.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q3_K_S.gguf) | Q3_K_S | 1.326 GB | very small, high quality loss |
| [shortgpt_llama3_5L_50-Q3_K_M.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q3_K_M.gguf) | Q3_K_M | 1.446 GB | very small, high quality loss |
| [shortgpt_llama3_5L_50-Q3_K_L.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q3_K_L.gguf) | Q3_K_L | 1.550 GB | small, substantial quality loss |
| [shortgpt_llama3_5L_50-Q4_0.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q4_0.gguf) | Q4_0 | 1.634 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [shortgpt_llama3_5L_50-Q4_K_S.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q4_K_S.gguf) | Q4_K_S | 1.642 GB | small, greater quality loss |
| [shortgpt_llama3_5L_50-Q4_K_M.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q4_K_M.gguf) | Q4_K_M | 1.714 GB | medium, balanced quality - recommended |
| [shortgpt_llama3_5L_50-Q5_0.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q5_0.gguf) | Q5_0 | 1.923 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [shortgpt_llama3_5L_50-Q5_K_S.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q5_K_S.gguf) | Q5_K_S | 1.923 GB | large, low quality loss - recommended |
| [shortgpt_llama3_5L_50-Q5_K_M.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q5_K_M.gguf) | Q5_K_M | 1.965 GB | large, very low quality loss - recommended |
| [shortgpt_llama3_5L_50-Q6_K.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q6_K.gguf) | Q6_K | 2.231 GB | very large, extremely low quality loss |
| [shortgpt_llama3_5L_50-Q8_0.gguf](https://huggingface.co/tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF/blob/main/shortgpt_llama3_5L_50-Q8_0.gguf) | Q8_0 | 2.887 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF --include "shortgpt_llama3_5L_50-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/rahatneuron_shortgpt_llama3_5L_50-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/OLMo-2-0325-32B-Instruct-GGUF | tensorblock | 2025-06-19T01:40:19Z | 151 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"dataset:allenai/RLVR-GSM-MATH-IF-Mixed-Constraints",
"base_model:allenai/OLMo-2-0325-32B-Instruct",
"base_model:quantized:allenai/OLMo-2-0325-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-23T19:00:31Z | ---
license: apache-2.0
language:
- en
datasets:
- allenai/RLVR-GSM-MATH-IF-Mixed-Constraints
base_model: allenai/OLMo-2-0325-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## allenai/OLMo-2-0325-32B-Instruct - GGUF
This repo contains GGUF format model files for [allenai/OLMo-2-0325-32B-Instruct](https://huggingface.co/allenai/OLMo-2-0325-32B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}
<|user|>
{prompt}
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OLMo-2-0325-32B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q2_K.gguf) | Q2_K | 12.006 GB | smallest, significant quality loss - not recommended for most purposes |
| [OLMo-2-0325-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.059 GB | very small, high quality loss |
| [OLMo-2-0325-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.601 GB | very small, high quality loss |
| [OLMo-2-0325-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 16.913 GB | small, substantial quality loss |
| [OLMo-2-0325-32B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.271 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OLMo-2-0325-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.416 GB | small, greater quality loss |
| [OLMo-2-0325-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.483 GB | medium, balanced quality - recommended |
| [OLMo-2-0325-32B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q5_0.gguf) | Q5_0 | 22.236 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OLMo-2-0325-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.236 GB | large, low quality loss - recommended |
| [OLMo-2-0325-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 22.860 GB | large, very low quality loss - recommended |
| [OLMo-2-0325-32B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q6_K.gguf) | Q6_K | 26.449 GB | very large, extremely low quality loss |
| [OLMo-2-0325-32B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/OLMo-2-0325-32B-Instruct-GGUF/blob/main/OLMo-2-0325-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.256 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OLMo-2-0325-32B-Instruct-GGUF --include "OLMo-2-0325-32B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OLMo-2-0325-32B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
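Beyond raw completion, a downloaded file can also be driven through a chat-style API. The sketch below is an assumption based on the `llama-cpp-python` bindings, which apply the chat template embedded in the GGUF metadata (when one is present) so the `<|user|>`/`<|assistant|>` tags from the prompt template do not have to be formatted by hand.
```python
# Sketch (assumption): chat-style usage via llama-cpp-python; create_chat_completion
# uses the chat template embedded in the GGUF metadata when it is present.
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/OLMo-2-0325-32B-Instruct-Q4_K_M.gguf",
    n_ctx=4096,
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful math tutor."},
        {"role": "user", "content": "What is 17 * 23?"},
    ],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```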
|
tensorblock/mxbai-rerank-large-v2-GGUF | tensorblock | 2025-06-19T01:39:50Z | 108 | 1 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-ranking",
"en",
"zh",
"de",
"ja",
"ko",
"es",
"fr",
"ar",
"bn",
"ru",
"id",
"sw",
"te",
"th",
"base_model:mixedbread-ai/mxbai-rerank-large-v2",
"base_model:quantized:mixedbread-ai/mxbai-rerank-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-ranking | 2025-03-23T10:49:38Z | ---
library_name: transformers
license: apache-2.0
language:
- en
- zh
- de
- ja
- ko
- es
- fr
- ar
- bn
- ru
- id
- sw
- te
- th
base_model: mixedbread-ai/mxbai-rerank-large-v2
tags:
- TensorBlock
- GGUF
pipeline_tag: text-ranking
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mixedbread-ai/mxbai-rerank-large-v2 - GGUF
This repo contains GGUF format model files for [mixedbread-ai/mxbai-rerank-large-v2](https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mxbai-rerank-large-v2-Q2_K.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q2_K.gguf) | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes |
| [mxbai-rerank-large-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q3_K_S.gguf) | Q3_K_S | 0.761 GB | very small, high quality loss |
| [mxbai-rerank-large-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q3_K_M.gguf) | Q3_K_M | 0.824 GB | very small, high quality loss |
| [mxbai-rerank-large-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q3_K_L.gguf) | Q3_K_L | 0.880 GB | small, substantial quality loss |
| [mxbai-rerank-large-v2-Q4_0.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q4_0.gguf) | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mxbai-rerank-large-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q4_K_S.gguf) | Q4_K_S | 0.940 GB | small, greater quality loss |
| [mxbai-rerank-large-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q4_K_M.gguf) | Q4_K_M | 0.986 GB | medium, balanced quality - recommended |
| [mxbai-rerank-large-v2-Q5_0.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q5_0.gguf) | Q5_0 | 1.099 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mxbai-rerank-large-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q5_K_S.gguf) | Q5_K_S | 1.099 GB | large, low quality loss - recommended |
| [mxbai-rerank-large-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q5_K_M.gguf) | Q5_K_M | 1.125 GB | large, very low quality loss - recommended |
| [mxbai-rerank-large-v2-Q6_K.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q6_K.gguf) | Q6_K | 1.273 GB | very large, extremely low quality loss |
| [mxbai-rerank-large-v2-Q8_0.gguf](https://huggingface.co/tensorblock/mxbai-rerank-large-v2-GGUF/blob/main/mxbai-rerank-large-v2-Q8_0.gguf) | Q8_0 | 1.647 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mxbai-rerank-large-v2-GGUF --include "mxbai-rerank-large-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mxbai-rerank-large-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/rho-1b-sft-MATH-chat-GGUF | tensorblock | 2025-06-19T01:39:35Z | 17 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:realtreetune/rho-1b-sft-MATH-chat",
"base_model:quantized:realtreetune/rho-1b-sft-MATH-chat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-23T08:22:53Z | ---
base_model: realtreetune/rho-1b-sft-MATH-chat
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## realtreetune/rho-1b-sft-MATH-chat - GGUF
This repo contains GGUF format model files for [realtreetune/rho-1b-sft-MATH-chat](https://huggingface.co/realtreetune/rho-1b-sft-MATH-chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[MATH_TASK] Problem:
{prompt}
Solution:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [rho-1b-sft-MATH-chat-Q2_K.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q2_K.gguf) | Q2_K | 0.432 GB | smallest, significant quality loss - not recommended for most purposes |
| [rho-1b-sft-MATH-chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q3_K_S.gguf) | Q3_K_S | 0.499 GB | very small, high quality loss |
| [rho-1b-sft-MATH-chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q3_K_M.gguf) | Q3_K_M | 0.548 GB | very small, high quality loss |
| [rho-1b-sft-MATH-chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q3_K_L.gguf) | Q3_K_L | 0.592 GB | small, substantial quality loss |
| [rho-1b-sft-MATH-chat-Q4_0.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q4_0.gguf) | Q4_0 | 0.637 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [rho-1b-sft-MATH-chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q4_K_S.gguf) | Q4_K_S | 0.640 GB | small, greater quality loss |
| [rho-1b-sft-MATH-chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q4_K_M.gguf) | Q4_K_M | 0.668 GB | medium, balanced quality - recommended |
| [rho-1b-sft-MATH-chat-Q5_0.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q5_0.gguf) | Q5_0 | 0.766 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [rho-1b-sft-MATH-chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q5_K_S.gguf) | Q5_K_S | 0.766 GB | large, low quality loss - recommended |
| [rho-1b-sft-MATH-chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q5_K_M.gguf) | Q5_K_M | 0.782 GB | large, very low quality loss - recommended |
| [rho-1b-sft-MATH-chat-Q6_K.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q6_K.gguf) | Q6_K | 0.903 GB | very large, extremely low quality loss |
| [rho-1b-sft-MATH-chat-Q8_0.gguf](https://huggingface.co/tensorblock/rho-1b-sft-MATH-chat-GGUF/blob/main/rho-1b-sft-MATH-chat-Q8_0.gguf) | Q8_0 | 1.170 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/rho-1b-sft-MATH-chat-GGUF --include "rho-1b-sft-MATH-chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/rho-1b-sft-MATH-chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/granite-guardian-3.2-3b-a800m-GGUF | tensorblock | 2025-06-19T01:37:19Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:ibm-granite/granite-guardian-3.2-3b-a800m",
"base_model:quantized:ibm-granite/granite-guardian-3.2-3b-a800m",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-22T02:31:37Z | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: ibm-granite/granite-guardian-3.2-3b-a800m
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ibm-granite/granite-guardian-3.2-3b-a800m - GGUF
This repo contains GGUF format model files for [ibm-granite/granite-guardian-3.2-3b-a800m](https://huggingface.co/ibm-granite/granite-guardian-3.2-3b-a800m).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|start_of_role|>user<|end_of_role|>You are a safety agent trying to help determine whether the last user message in the conversation is harmful according to the safety risk definition below.
<start_of_conversation>
User Message: {prompt}
<end_of_conversation>
Our safety risk definition is defined below:
<start_of_risk_definition>
- The last 'User' message is considered universally harmful.
<end_of_risk_definition>
Is the last user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
```
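As a rough illustration of running the template above locally, the sketch below assumes you have downloaded the Q4_K_M file, built a recent llama.cpp (which ships the `llama-cli` binary; older builds name it `main`), and saved the filled-in template, with `{prompt}` replaced by the user message, to a file named `filled_prompt.txt`:
```shell
# Run the safety classification prompt; -n 8 keeps the answer short ('Yes' or 'No').
llama-cli -m ./granite-guardian-3.2-3b-a800m-Q4_K_M.gguf -p "$(cat filled_prompt.txt)" -n 8
```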
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [granite-guardian-3.2-3b-a800m-Q2_K.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q2_K.gguf) | Q2_K | 1.241 GB | smallest, significant quality loss - not recommended for most purposes |
| [granite-guardian-3.2-3b-a800m-Q3_K_S.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q3_K_S.gguf) | Q3_K_S | 1.456 GB | very small, high quality loss |
| [granite-guardian-3.2-3b-a800m-Q3_K_M.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q3_K_M.gguf) | Q3_K_M | 1.611 GB | very small, high quality loss |
| [granite-guardian-3.2-3b-a800m-Q3_K_L.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q3_K_L.gguf) | Q3_K_L | 1.742 GB | small, substantial quality loss |
| [granite-guardian-3.2-3b-a800m-Q4_0.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q4_0.gguf) | Q4_0 | 1.884 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [granite-guardian-3.2-3b-a800m-Q4_K_S.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q4_K_S.gguf) | Q4_K_S | 1.900 GB | small, greater quality loss |
| [granite-guardian-3.2-3b-a800m-Q4_K_M.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q4_K_M.gguf) | Q4_K_M | 2.017 GB | medium, balanced quality - recommended |
| [granite-guardian-3.2-3b-a800m-Q5_0.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q5_0.gguf) | Q5_0 | 2.287 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [granite-guardian-3.2-3b-a800m-Q5_K_S.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q5_K_S.gguf) | Q5_K_S | 2.287 GB | large, low quality loss - recommended |
| [granite-guardian-3.2-3b-a800m-Q5_K_M.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q5_K_M.gguf) | Q5_K_M | 2.355 GB | large, very low quality loss - recommended |
| [granite-guardian-3.2-3b-a800m-Q6_K.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q6_K.gguf) | Q6_K | 2.714 GB | very large, extremely low quality loss |
| [granite-guardian-3.2-3b-a800m-Q8_0.gguf](https://huggingface.co/tensorblock/granite-guardian-3.2-3b-a800m-GGUF/blob/main/granite-guardian-3.2-3b-a800m-Q8_0.gguf) | Q8_0 | 3.513 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/granite-guardian-3.2-3b-a800m-GGUF --include "granite-guardian-3.2-3b-a800m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/granite-guardian-3.2-3b-a800m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/german-r1-GGUF | tensorblock | 2025-06-19T01:37:05Z | 100 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"de",
"dataset:openGPT-X/gsm8kx",
"base_model:malteos/german-r1",
"base_model:quantized:malteos/german-r1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-22T00:03:20Z | ---
library_name: transformers
datasets:
- openGPT-X/gsm8kx
language:
- de
base_model: malteos/german-r1
pipeline_tag: text-generation
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## malteos/german-r1 - GGUF
This repo contains GGUF format model files for [malteos/german-r1](https://huggingface.co/malteos/german-r1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
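One way to serve this ChatML-style model locally is llama.cpp's OpenAI-compatible server; the binary name, port, and file path below are assumptions for a recent llama.cpp build and a locally downloaded quant, and the server applies the chat template for you:
```shell
# Terminal 1: start the server on a downloaded quant (path is illustrative)
llama-server -m MY_LOCAL_DIR/german-r1-Q4_K_M.gguf --port 8080

# Terminal 2: send a chat request to the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [
    {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
    {"role": "user", "content": "Was ist 17 * 23? Denke Schritt für Schritt."}
  ]
}'
```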
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [german-r1-Q2_K.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q2_K.gguf) | Q2_K | 1.275 GB | smallest, significant quality loss - not recommended for most purposes |
| [german-r1-Q3_K_S.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q3_K_S.gguf) | Q3_K_S | 1.454 GB | very small, high quality loss |
| [german-r1-Q3_K_M.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q3_K_M.gguf) | Q3_K_M | 1.590 GB | very small, high quality loss |
| [german-r1-Q3_K_L.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q3_K_L.gguf) | Q3_K_L | 1.707 GB | small, substantial quality loss |
| [german-r1-Q4_0.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q4_0.gguf) | Q4_0 | 1.823 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [german-r1-Q4_K_S.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q4_K_S.gguf) | Q4_K_S | 1.834 GB | small, greater quality loss |
| [german-r1-Q4_K_M.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q4_K_M.gguf) | Q4_K_M | 1.930 GB | medium, balanced quality - recommended |
| [german-r1-Q5_0.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q5_0.gguf) | Q5_0 | 2.170 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [german-r1-Q5_K_S.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q5_K_S.gguf) | Q5_K_S | 2.170 GB | large, low quality loss - recommended |
| [german-r1-Q5_K_M.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q5_K_M.gguf) | Q5_K_M | 2.225 GB | large, very low quality loss - recommended |
| [german-r1-Q6_K.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q6_K.gguf) | Q6_K | 2.538 GB | very large, extremely low quality loss |
| [german-r1-Q8_0.gguf](https://huggingface.co/tensorblock/german-r1-GGUF/blob/main/german-r1-Q8_0.gguf) | Q8_0 | 3.285 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/german-r1-GGUF --include "german-r1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/german-r1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/uiop40-GGUF | tensorblock | 2025-06-19T01:37:01Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:juhw/uiop40",
"base_model:quantized:juhw/uiop40",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-21T23:17:50Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: juhw/uiop40
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## juhw/uiop40 - GGUF
This repo contains GGUF format model files for [juhw/uiop40](https://huggingface.co/juhw/uiop40).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [uiop40-Q2_K.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q2_K.gguf) | Q2_K | 3.090 GB | smallest, significant quality loss - not recommended for most purposes |
| [uiop40-Q3_K_S.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q3_K_S.gguf) | Q3_K_S | 3.551 GB | very small, high quality loss |
| [uiop40-Q3_K_M.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q3_K_M.gguf) | Q3_K_M | 3.880 GB | very small, high quality loss |
| [uiop40-Q3_K_L.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q3_K_L.gguf) | Q3_K_L | 4.172 GB | small, substantial quality loss |
| [uiop40-Q4_0.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q4_0.gguf) | Q4_0 | 4.497 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [uiop40-Q4_K_S.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q4_K_S.gguf) | Q4_K_S | 4.525 GB | small, greater quality loss |
| [uiop40-Q4_K_M.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q4_K_M.gguf) | Q4_K_M | 4.736 GB | medium, balanced quality - recommended |
| [uiop40-Q5_0.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q5_0.gguf) | Q5_0 | 5.388 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [uiop40-Q5_K_S.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q5_K_S.gguf) | Q5_K_S | 5.388 GB | large, low quality loss - recommended |
| [uiop40-Q5_K_M.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q5_K_M.gguf) | Q5_K_M | 5.511 GB | large, very low quality loss - recommended |
| [uiop40-Q6_K.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q6_K.gguf) | Q6_K | 6.334 GB | very large, extremely low quality loss |
| [uiop40-Q8_0.gguf](https://huggingface.co/tensorblock/uiop40-GGUF/blob/main/uiop40-Q8_0.gguf) | Q8_0 | 8.202 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/uiop40-GGUF --include "uiop40-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/uiop40-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF | tensorblock | 2025-06-19T01:36:45Z | 78 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview",
"base_model:quantized:Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-21T17:52:22Z | ---
base_model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
license: apache-2.0
model-index:
- name: Qwen2.5-Dyanka-7B-Preview
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 76.4
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 36.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 48.79
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.95
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.51
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.51
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Xiaojian9992024%2FQwen2.5-Dyanka-7B-Preview
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview - GGUF
This repo contains GGUF format model files for [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-Dyanka-7B-Preview-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-Dyanka-7B-Preview-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [Qwen2.5-Dyanka-7B-Preview-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [Qwen2.5-Dyanka-7B-Preview-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [Qwen2.5-Dyanka-7B-Preview-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-Dyanka-7B-Preview-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [Qwen2.5-Dyanka-7B-Preview-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [Qwen2.5-Dyanka-7B-Preview-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-Dyanka-7B-Preview-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [Qwen2.5-Dyanka-7B-Preview-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [Qwen2.5-Dyanka-7B-Preview-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [Qwen2.5-Dyanka-7B-Preview-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF/blob/main/Qwen2.5-Dyanka-7B-Preview-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF --include "Qwen2.5-Dyanka-7B-Preview-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Qwen2.5-Dyanka-7B-Preview-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/OpenR1-Qwen-7B-French-GGUF | tensorblock | 2025-06-19T01:36:16Z | 170 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"reasoning",
"thinking",
"deepseek",
"dolphin",
"qwen",
"TensorBlock",
"GGUF",
"fr",
"dataset:WiroAI/dolphin-r1-french",
"base_model:WiroAI/OpenR1-Qwen-7B-French",
"base_model:quantized:WiroAI/OpenR1-Qwen-7B-French",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-21T12:56:09Z | ---
datasets: WiroAI/dolphin-r1-french
library_name: transformers
model_name: OpenR1-Qwen-7B-French
tags:
- generated_from_trainer
- trl
- sft
- reasoning
- thinking
- deepseek
- dolphin
- qwen
- TensorBlock
- GGUF
licence: license
license: apache-2.0
language:
- fr
base_model: WiroAI/OpenR1-Qwen-7B-French
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## WiroAI/OpenR1-Qwen-7B-French - GGUF
This repo contains GGUF format model files for [WiroAI/OpenR1-Qwen-7B-French](https://huggingface.co/WiroAI/OpenR1-Qwen-7B-French).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OpenR1-Qwen-7B-French-Q2_K.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [OpenR1-Qwen-7B-French-Q3_K_S.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [OpenR1-Qwen-7B-French-Q3_K_M.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [OpenR1-Qwen-7B-French-Q3_K_L.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [OpenR1-Qwen-7B-French-Q4_0.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OpenR1-Qwen-7B-French-Q4_K_S.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [OpenR1-Qwen-7B-French-Q4_K_M.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [OpenR1-Qwen-7B-French-Q5_0.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OpenR1-Qwen-7B-French-Q5_K_S.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [OpenR1-Qwen-7B-French-Q5_K_M.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [OpenR1-Qwen-7B-French-Q6_K.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [OpenR1-Qwen-7B-French-Q8_0.gguf](https://huggingface.co/tensorblock/OpenR1-Qwen-7B-French-GGUF/blob/main/OpenR1-Qwen-7B-French-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/OpenR1-Qwen-7B-French-GGUF --include "OpenR1-Qwen-7B-French-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OpenR1-Qwen-7B-French-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Smol-Hub-tldr-GGUF | tensorblock | 2025-06-19T01:35:28Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"TensorBlock",
"GGUF",
"dataset:davanstrien/hub-tldr-dataset-summaries-llama",
"dataset:davanstrien/hub-tldr-model-summaries-llama",
"base_model:davanstrien/Smol-Hub-tldr",
"base_model:quantized:davanstrien/Smol-Hub-tldr",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-21T02:37:47Z | ---
base_model: davanstrien/Smol-Hub-tldr
library_name: transformers
model_name: SmolLM2-360M-tldr-sft-2025-02-12_15-13
tags:
- generated_from_trainer
- trl
- sft
- TensorBlock
- GGUF
license: mit
datasets:
- davanstrien/hub-tldr-dataset-summaries-llama
- davanstrien/hub-tldr-model-summaries-llama
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## davanstrien/Smol-Hub-tldr - GGUF
This repo contains GGUF format model files for [davanstrien/Smol-Hub-tldr](https://huggingface.co/davanstrien/Smol-Hub-tldr).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<CARD>{prompt}</CARD><CARD_SUMMARY>
```
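As a sketch of how the template above might be used, the commands below wrap a local model card file in the `<CARD>...</CARD><CARD_SUMMARY>` markers and ask for a short summary; the file names, quant choice, and `llama-cli` binary are assumptions:
```shell
# Build the prompt from a local card file and generate a short summary (paths illustrative).
PROMPT="<CARD>$(cat my_model_card.md)</CARD><CARD_SUMMARY>"
llama-cli -m MY_LOCAL_DIR/Smol-Hub-tldr-Q4_K_M.gguf -p "$PROMPT" -n 64
```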
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Smol-Hub-tldr-Q2_K.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q2_K.gguf) | Q2_K | 0.219 GB | smallest, significant quality loss - not recommended for most purposes |
| [Smol-Hub-tldr-Q3_K_S.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q3_K_S.gguf) | Q3_K_S | 0.219 GB | very small, high quality loss |
| [Smol-Hub-tldr-Q3_K_M.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q3_K_M.gguf) | Q3_K_M | 0.235 GB | very small, high quality loss |
| [Smol-Hub-tldr-Q3_K_L.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q3_K_L.gguf) | Q3_K_L | 0.246 GB | small, substantial quality loss |
| [Smol-Hub-tldr-Q4_0.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q4_0.gguf) | Q4_0 | 0.229 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Smol-Hub-tldr-Q4_K_S.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q4_K_S.gguf) | Q4_K_S | 0.260 GB | small, greater quality loss |
| [Smol-Hub-tldr-Q4_K_M.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q4_K_M.gguf) | Q4_K_M | 0.271 GB | medium, balanced quality - recommended |
| [Smol-Hub-tldr-Q5_0.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q5_0.gguf) | Q5_0 | 0.268 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Smol-Hub-tldr-Q5_K_S.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q5_K_S.gguf) | Q5_K_S | 0.283 GB | large, low quality loss - recommended |
| [Smol-Hub-tldr-Q5_K_M.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q5_K_M.gguf) | Q5_K_M | 0.290 GB | large, very low quality loss - recommended |
| [Smol-Hub-tldr-Q6_K.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q6_K.gguf) | Q6_K | 0.367 GB | very large, extremely low quality loss |
| [Smol-Hub-tldr-Q8_0.gguf](https://huggingface.co/tensorblock/Smol-Hub-tldr-GGUF/blob/main/Smol-Hub-tldr-Q8_0.gguf) | Q8_0 | 0.386 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Smol-Hub-tldr-GGUF --include "Smol-Hub-tldr-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Smol-Hub-tldr-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/Calcium-Opus-14B-Elite-1M-GGUF | tensorblock | 2025-06-19T01:34:53Z | 177 | 0 | transformers | [
"transformers",
"gguf",
"opus",
"14b",
"CoCo",
"reasoning",
"cosine",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:prithivMLmods/Calcium-Opus-14B-Elite-1M",
"base_model:quantized:prithivMLmods/Calcium-Opus-14B-Elite-1M",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-09T12:06:13Z | ---
license: apache-2.0
language:
- en
base_model: prithivMLmods/Calcium-Opus-14B-Elite-1M
pipeline_tag: text-generation
library_name: transformers
tags:
- opus
- 14b
- CoCo
- reasoning
- cosine
- TensorBlock
- GGUF
model-index:
- name: Calcium-Opus-14B-Elite-1M
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 56.13
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Elite-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 46.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Elite-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 29.53
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Elite-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.65
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Elite-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.28
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Elite-1M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 46.13
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Elite-1M
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## prithivMLmods/Calcium-Opus-14B-Elite-1M - GGUF
This repo contains GGUF format model files for [prithivMLmods/Calcium-Opus-14B-Elite-1M](https://huggingface.co/prithivMLmods/Calcium-Opus-14B-Elite-1M).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Calcium-Opus-14B-Elite-1M-Q2_K.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q2_K.gguf) | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |
| [Calcium-Opus-14B-Elite-1M-Q3_K_S.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q3_K_S.gguf) | Q3_K_S | 6.660 GB | very small, high quality loss |
| [Calcium-Opus-14B-Elite-1M-Q3_K_M.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q3_K_M.gguf) | Q3_K_M | 7.339 GB | very small, high quality loss |
| [Calcium-Opus-14B-Elite-1M-Q3_K_L.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q3_K_L.gguf) | Q3_K_L | 7.925 GB | small, substantial quality loss |
| [Calcium-Opus-14B-Elite-1M-Q4_0.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q4_0.gguf) | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Calcium-Opus-14B-Elite-1M-Q4_K_S.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q4_K_S.gguf) | Q4_K_S | 8.573 GB | small, greater quality loss |
| [Calcium-Opus-14B-Elite-1M-Q4_K_M.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q4_K_M.gguf) | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |
| [Calcium-Opus-14B-Elite-1M-Q5_0.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q5_0.gguf) | Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Calcium-Opus-14B-Elite-1M-Q5_K_S.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q5_K_S.gguf) | Q5_K_S | 10.267 GB | large, low quality loss - recommended |
| [Calcium-Opus-14B-Elite-1M-Q5_K_M.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q5_K_M.gguf) | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |
| [Calcium-Opus-14B-Elite-1M-Q6_K.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q6_K.gguf) | Q6_K | 12.125 GB | very large, extremely low quality loss |
| [Calcium-Opus-14B-Elite-1M-Q8_0.gguf](https://huggingface.co/tensorblock/Calcium-Opus-14B-Elite-1M-GGUF/blob/main/Calcium-Opus-14B-Elite-1M-Q8_0.gguf) | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Calcium-Opus-14B-Elite-1M-GGUF --include "Calcium-Opus-14B-Elite-1M-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Calcium-Opus-14B-Elite-1M-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/llama3-1_8b_r1_annotated_aops-GGUF | tensorblock | 2025-06-19T01:33:06Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/llama3-1_8b_r1_annotated_aops",
"base_model:quantized:mlfoundations-dev/llama3-1_8b_r1_annotated_aops",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-07T11:24:01Z | ---
library_name: transformers
license: llama3.1
base_model: mlfoundations-dev/llama3-1_8b_r1_annotated_aops
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: llama3-1_8b_r1_annotated_aops
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlfoundations-dev/llama3-1_8b_r1_annotated_aops - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/llama3-1_8b_r1_annotated_aops](https://huggingface.co/mlfoundations-dev/llama3-1_8b_r1_annotated_aops).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama3-1_8b_r1_annotated_aops-Q2_K.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama3-1_8b_r1_annotated_aops-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [llama3-1_8b_r1_annotated_aops-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [llama3-1_8b_r1_annotated_aops-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [llama3-1_8b_r1_annotated_aops-Q4_0.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama3-1_8b_r1_annotated_aops-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [llama3-1_8b_r1_annotated_aops-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [llama3-1_8b_r1_annotated_aops-Q5_0.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama3-1_8b_r1_annotated_aops-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [llama3-1_8b_r1_annotated_aops-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [llama3-1_8b_r1_annotated_aops-Q6_K.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [llama3-1_8b_r1_annotated_aops-Q8_0.gguf](https://huggingface.co/tensorblock/llama3-1_8b_r1_annotated_aops-GGUF/blob/main/llama3-1_8b_r1_annotated_aops-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/llama3-1_8b_r1_annotated_aops-GGUF --include "llama3-1_8b_r1_annotated_aops-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/llama3-1_8b_r1_annotated_aops-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|