# JackFram/llama-160m-GGUF

Quantized GGUF model files for llama-160m from JackFram.

| Name | Quant method | Size |
|------|--------------|------|
| llama-160m.fp16.gguf | fp16 | 326.58 MB |
| llama-160m.q2_k.gguf | q2_k | 77.23 MB |
| llama-160m.q3_k_m.gguf | q3_k_m | 87.54 MB |
| llama-160m.q4_k_m.gguf | q4_k_m | 104.03 MB |
| llama-160m.q5_k_m.gguf | q5_k_m | 119.04 MB |
| llama-160m.q6_k.gguf | q6_k | 135.00 MB |
| llama-160m.q8_0.gguf | q8_0 | 174.33 MB |
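The file sizes above roughly track bits per weight for each quantization level. As a sanity check, here is a small sketch that estimates bits per weight from the table's sizes and the reported 162M parameter count (the numbers are copied from this card; the small overhead above the nominal bit width comes from GGUF metadata and per-block scales):

```python
# Approximate bits per weight for each quant, using the sizes (MB)
# from the table above and the reported 162M parameters.
PARAMS = 162e6

sizes_mb = {
    "q2_k": 77.23,
    "q3_k_m": 87.54,
    "q4_k_m": 104.03,
    "q5_k_m": 119.04,
    "q6_k": 135.00,
    "q8_0": 174.33,
    "fp16": 326.58,
}

def bits_per_weight(size_mb: float, params: float = PARAMS) -> float:
    # size in bytes * 8 bits, divided by parameter count
    return size_mb * 1e6 * 8 / params

for name, mb in sizes_mb.items():
    print(f"{name}: ~{bits_per_weight(mb):.1f} bits/weight")
```

For example, q8_0 works out to roughly 8.6 bits/weight and fp16 to roughly 16.1, which lines up with the nominal widths plus a small per-block overhead.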

## Original Model Card

### Model description

This is a LLaMA-like model with only 160M parameters trained on Wikipedia and part of the C4-en and C4-realnewslike datasets.

No evaluation has been conducted yet, so use it with care.

The model was developed mainly to serve as a base small speculative model (SSM) in the SpecInfer paper.
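In speculative inference, a small draft model like this one proposes several tokens cheaply, and the large target model then verifies them, keeping the longest agreeing prefix. The following is a toy sketch of that draft-and-verify loop using stand-in deterministic "models" (plain functions, not real LLMs, and with greedy agreement rather than SpecInfer's token-tree verification or probabilistic acceptance):

```python
# Toy sketch of speculative decoding's draft-and-verify loop.
# draft_model / target_model are hypothetical stand-ins: each maps a
# token context to the next token id.

def draft_model(ctx):
    # cheap draft model: next token = last token + 1, wrapping at 10
    return (ctx[-1] + 1) % 10

def target_model(ctx):
    # expensive target model: agrees with the draft except it never
    # emits token 7 (it emits 0 instead)
    t = (ctx[-1] + 1) % 10
    return 0 if t == 7 else t

def speculative_step(ctx, k=4):
    # 1) draft k tokens autoregressively with the cheap model
    proposed = []
    tmp = list(ctx)
    for _ in range(k):
        tok = draft_model(tmp)
        proposed.append(tok)
        tmp.append(tok)
    # 2) verify: accept the longest prefix the target agrees with,
    #    then substitute the target's own token at the first mismatch
    accepted = []
    tmp = list(ctx)
    for tok in proposed:
        t = target_model(tmp)
        if t == tok:
            accepted.append(tok)
            tmp.append(tok)
        else:
            accepted.append(t)  # target's correction ends the step
            break
    return ctx + accepted
```

In a real system the verification of all `k` drafted tokens happens in a single batched forward pass of the target model, which is where the speedup comes from; this sketch only shows the acceptance logic.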

### Citation

To cite the model, please use:

```bibtex
@misc{miao2023specinfer,
      title={SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification},
      author={Xupeng Miao and Gabriele Oliaro and Zhihao Zhang and Xinhao Cheng and Zeyu Wang and Rae Ying Yee Wong and Zhuoming Chen and Daiyaan Arfeen and Reyna Abhyankar and Zhihao Jia},
      year={2023},
      eprint={2305.09781},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## Model details

- Format: GGUF
- Model size: 162M params
- Architecture: llama

