4-bit GGUF quantization of TinyLlama-1.1B-intermediate-step-955k-token-2T
I used this script to generate the file with this command:

    python make-ggml.py ~/ooba/models/TinyLlama_TinyLlama-1.1B-intermediate-step-955k-token-2T/ --model_type=llama --quants=Q4_K_M
The original model is small enough that it ships as a single safetensors file named model.safetensors, so I had to rename it to model-00001-of-00001.safetensors for the script to load the model properly.
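The rename step above can be sketched as a small helper. This is a minimal sketch, not part of make-ggml.py itself; the function name `prepare_for_make_ggml` and the assumption that the script globs for sharded checkpoint names (`model-00001-of-00001.safetensors`) are taken from the workaround described above.

```python
from pathlib import Path

def prepare_for_make_ggml(model_dir: str) -> None:
    """Rename a single-shard checkpoint so a script expecting sharded
    checkpoint filenames can find it.

    Assumption: the loader looks for the sharded naming pattern
    model-00001-of-00001.safetensors, as described in the workaround above.
    """
    src = Path(model_dir) / "model.safetensors"
    dst = Path(model_dir) / "model-00001-of-00001.safetensors"
    # Only rename when the single-file checkpoint exists and the
    # sharded-style name is not already taken.
    if src.exists() and not dst.exists():
        src.rename(dst)
```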