Llama-3.1-8B Sonnet fine-tune in quantized GGUF format

Using unsloth for fine-tuning:

==((====))==  Unsloth 2025.2.4: Fast Llama patching. Transformers: 4.48.2.
   \\   /|    GPU: NVIDIA A100-SXM4-40GB. Max memory: 39.557 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.5.1+cu124. CUDA: 8.0. CUDA Toolkit: 12.4. Triton: 3.1.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.29. FA2 = False]
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth

Original model: https://huggingface.co/ayan4m1/Llama3.1-8B-Sonnet

Quantized into the following variants (a conversion sketch follows the list):

  • Q8_0
  • Q6_K
  • Q5_K_M
  • Q4_K_M
  • Q3_K_M
  • Q2_K

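One way to produce quantizations like these is Unsloth's GGUF export helper, which wraps llama.cpp's converter. The sketch below assumes the fine-tuned model and tokenizer objects from the training step above; the output directory name is hypothetical, and any of the listed methods can be passed the same way.

# Export the fine-tuned model to GGUF via Unsloth (wraps llama.cpp conversion).
for method in ["q8_0", "q6_k", "q5_k_m", "q4_k_m", "q3_k_m", "q2_k"]:
    model.save_pretrained_gguf(
        "Llama3.1-8B-Sonnet-GGUF",   # output directory (hypothetical)
        tokenizer,
        quantization_method=method,
    )
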
Prompt format

<|begin_of_text|>{prompt}
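
Since the prompt format above is a raw completion prefix rather than a chat template, the quantized files can be run with a plain completion call, for example via llama-cpp-python. The GGUF file name below is illustrative.

from llama_cpp import Llama

# Load one of the quantized files (file name is illustrative).
llm = Llama(model_path="Llama3.1-8B-Sonnet.Q4_K_M.gguf", n_ctx=4096)

# Raw completion call using the prompt format shown above.
prompt = "<|begin_of_text|>Write a short poem about quantization."
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])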

Credits

Thanks to Meta, mlfoundations-dev, and Gryphe for providing the data used to create this fine-tune.
