This repository provides the model in GGUF format for inference with llama.cpp.

It is the 110M-parameter Llama 2 architecture model trained on the TinyStories dataset, converted from karpathy/tinyllamas. See the llama2.c project for more details.
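Below is a minimal inference sketch using the llama-cpp-python bindings (one of several ways to run a GGUF file). The file name `tinyllama-110M-F16.gguf` is assumed from the repository name and may differ from the actual file in this repo.

```python
# Minimal sketch: run the GGUF model locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the GGUF file from this
# repository has been downloaded next to this script (file name assumed).
from llama_cpp import Llama

llm = Llama(model_path="tinyllama-110M-F16.gguf")  # assumed file name

# TinyStories-trained models respond well to simple story-style prompts.
output = llm("Once upon a time", max_tokens=64)
print(output["choices"][0]["text"])
```

The same file can also be run directly with the llama.cpp command-line tools.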
Base model: nickypro/tinyllama-110M