---
base_model: unsloth/llama-3.3-70b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Llama-3.3-70B-o1 GGUF Quants
This repository contains the GGUF quants for the [Llama-3.3-70B-o1](https://huggingface.co/codelion/Llama-3.3-70B-o1) model.
You can use them for local inference with tools such as Ollama or llama.cpp.
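
For example, the quants can be loaded with the llama-cpp-python bindings. This is a minimal sketch; the GGUF filename and the generation settings shown are assumptions for illustration, not part of this repository.

```python
# Minimal sketch: run a chat completion against one of the GGUF quants
# using llama-cpp-python. The filename below is an assumed example --
# substitute whichever quant file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.3-70B-o1.Q4_K_M.gguf",  # assumed local filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```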