Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

MiniPLM-llama3.1-212M - GGUF
- Model creator: https://huggingface.co/MiniLLM/
- Original model: https://huggingface.co/MiniLLM/MiniPLM-llama3.1-212M/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MiniPLM-llama3.1-212M.Q2_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q2_K.gguf) | Q2_K | 0.12GB |
| [MiniPLM-llama3.1-212M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.IQ3_XS.gguf) | IQ3_XS | 0.13GB |
| [MiniPLM-llama3.1-212M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.IQ3_S.gguf) | IQ3_S | 0.13GB |
| [MiniPLM-llama3.1-212M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q3_K_S.gguf) | Q3_K_S | 0.13GB |
| [MiniPLM-llama3.1-212M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.IQ3_M.gguf) | IQ3_M | 0.13GB |
| [MiniPLM-llama3.1-212M.Q3_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q3_K.gguf) | Q3_K | 0.13GB |
| [MiniPLM-llama3.1-212M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q3_K_M.gguf) | Q3_K_M | 0.13GB |
| [MiniPLM-llama3.1-212M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q3_K_L.gguf) | Q3_K_L | 0.14GB |
| [MiniPLM-llama3.1-212M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.IQ4_XS.gguf) | IQ4_XS | 0.14GB |
| [MiniPLM-llama3.1-212M.Q4_0.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q4_0.gguf) | Q4_0 | 0.14GB |
| [MiniPLM-llama3.1-212M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.IQ4_NL.gguf) | IQ4_NL | 0.14GB |
| [MiniPLM-llama3.1-212M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q4_K_S.gguf) | Q4_K_S | 0.14GB |
| [MiniPLM-llama3.1-212M.Q4_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q4_K.gguf) | Q4_K | 0.15GB |
| [MiniPLM-llama3.1-212M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q4_K_M.gguf) | Q4_K_M | 0.15GB |
| [MiniPLM-llama3.1-212M.Q4_1.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q4_1.gguf) | Q4_1 | 0.15GB |
| [MiniPLM-llama3.1-212M.Q5_0.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q5_0.gguf) | Q5_0 | 0.16GB |
| [MiniPLM-llama3.1-212M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q5_K_S.gguf) | Q5_K_S | 0.16GB |
| [MiniPLM-llama3.1-212M.Q5_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q5_K.gguf) | Q5_K | 0.16GB |
| [MiniPLM-llama3.1-212M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q5_K_M.gguf) | Q5_K_M | 0.16GB |
| [MiniPLM-llama3.1-212M.Q5_1.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q5_1.gguf) | Q5_1 | 0.16GB |
| [MiniPLM-llama3.1-212M.Q6_K.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q6_K.gguf) | Q6_K | 0.17GB |
| [MiniPLM-llama3.1-212M.Q8_0.gguf](https://huggingface.co/RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf/blob/main/MiniPLM-llama3.1-212M.Q8_0.gguf) | Q8_0 | 0.22GB |

Original model description:
---
library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---

# MiniPLM-llama3.1-212M

[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)

**MiniPLM-llama3.1-212M** is a 212M-parameter model with the [LLaMA3.1 architecture](https://arxiv.org/abs/2407.21783), pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework, with the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) model as the teacher. This model demonstrates the flexibility of the MiniPLM framework in conducting knowledge distillation across model families. We also open-source the [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM for reproducibility.
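As a usage sketch (not part of the original card), here is one way to download and run one of the quants from the table above with `huggingface_hub` and `llama-cpp-python`. The repo id and filename are taken from the table links; the choice of Q4_K_M, the context size, and the sampling parameters are arbitrary assumptions.

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quantized files listed above (Q4_K_M chosen arbitrarily).
model_path = hf_hub_download(
    repo_id="RichardErkhov/MiniLLM_-_MiniPLM-llama3.1-212M-gguf",
    filename="MiniPLM-llama3.1-212M.Q4_K_M.gguf",
)

# MiniPLM-llama3.1-212M is a base (pre-trained, non-chat) model,
# so plain text completion is the appropriate interface.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("The Pile is a large, diverse", max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])
```

Any other file from the table works the same way; smaller quants (e.g. Q2_K) trade output quality for a smaller download and memory footprint.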