GGUF files of princeton-nlp/gemma-2-9b-it-SimPO, quantized with larger embedding and output weights than the default GGUF settings:
- Q8_0 embed and output weights: Q6_K_L, Q5_K_L, Q4_K_L
- bf16 embed and output weights (possibly slower inference): Q8_0_L, Q6_K_XL, Q5_K_XL, Q4_K_XL
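The naming convention above can be summarized programmatically. This is a small illustrative sketch (the `embed_output_type` helper is hypothetical, not part of any library) mapping each quant variant in this repo to the precision of its embedding and output weights, as stated in the list above:

```python
# Embedding/output weight precision for each quant variant in this repo,
# taken from the list above: "_L" variants use Q8_0, "_XL" (and Q8_0_L) use bf16.
EMBED_OUTPUT_PRECISION = {
    "Q6_K_L": "Q8_0",
    "Q5_K_L": "Q8_0",
    "Q4_K_L": "Q8_0",
    "Q8_0_L": "bf16",
    "Q6_K_XL": "bf16",
    "Q5_K_XL": "bf16",
    "Q4_K_XL": "bf16",
}

def embed_output_type(quant: str) -> str:
    """Return the embed/output weight precision for a quant variant name."""
    return EMBED_OUTPUT_PRECISION[quant]

print(embed_output_type("Q4_K_L"))   # Q8_0
print(embed_output_type("Q4_K_XL"))  # bf16
```

The bf16 variants keep the embedding and output tensors at full brain-float precision, which may trade some inference speed for quality on those layers.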
Model tree for pipihand01/gemma-2-9b-it-SimPO-GGUF
- Base model: google/gemma-2-9b
- Finetuned: google/gemma-2-9b-it
- Finetuned: princeton-nlp/gemma-2-9b-it-SimPO