# Latxa 7b GGUF
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|---|---|---|---|---|---|
| latxa-7b-v1.gguf | F32 | 32 | 26 GB | | |
| latxa-7b-v1-f16.gguf | F16 | 16 | 13 GB | | |
| latxa-7b-v1-q8_0.gguf | Q8_0 | 8 | 6.7 GB | | |
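As a rough sanity check, the file sizes above follow from parameter count times bits per weight. The sketch below assumes ~6.74B parameters (the Llama 2 7B size that Latxa builds on) and ~8.5 effective bits per weight for Q8_0 (blocks of 32 int8 weights plus an fp16 scale); both figures are assumptions, not taken from this card.

```python
def approx_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: parameters x bits per weight, converted to GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# Assumed parameter count for a Llama-2-7b-based model.
N = 6.74e9
print(round(approx_size_gib(N, 32), 1))   # F32  -> 25.1 (table: 26 GB)
print(round(approx_size_gib(N, 16), 1))   # F16  -> 12.6 (table: 13 GB)
# Q8_0: ~8.5 effective bits/weight once block scales are counted.
print(round(approx_size_gib(N, 8.5), 1))  # Q8_0 -> 6.7  (table: 6.7 GB)
```

Actual RAM needed at inference time is higher than the file size, since the KV cache and activations come on top of the weights.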
## Model tree for xezpeleta/latxa-7b-v1-gguf

Base model: HiTZ/latxa-7b-v1