---
license: mit
tags:
- nlp
- q3
---

# Phi-4-mini-instruct-GGUF-Q3_K_M

This is a 3-bit quantized GGUF build of Microsoft's Phi-4-mini-instruct, a 3.8B-parameter Small Language Model.

## Description

The weights are quantized with llama.cpp to 3-bit precision (Q3_K_M). The screen capture below shows the simple CLI interface, which runs locally in a Linux terminal on a computer with an Intel Core i5 CPU **without internet**.

### Executing GGUF

* Download the "Phi-4-mini-instruct-GGUF-Q3_K_M.gguf" file directly, or clone the repository:
```
git clone https://huggingface.co/harisnaeem/Phi-4-mini-instruct-GGUF-Q3_K_M
```
* To run this language model in a simple CLI interface, provide the paths to "llama-cli" and "Phi-4-mini-instruct-GGUF-Q3_K_M.gguf" in the terminal:
```
(Path to llama CLI)/llama.cpp/build/bin/llama-cli --color --conversation --model (Path to model GGUF file)/Phi-4-mini-instruct-GGUF-Q3_K_M.gguf
```

## Example

![Phi-4-mini-instruct-GGUF-Q3_K_M](Phi-4-mini-instruct-GGUF-Q3_K_M_llama_CLI.png?raw=true)
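
## Running via Python bindings (optional)

Besides the llama-cli interface above, the same GGUF file can be loaded from Python. The snippet below is a minimal sketch, assuming the community llama-cpp-python package is installed (`pip install llama-cpp-python`) and that `model_path` is adjusted to wherever the GGUF was downloaded; it is not part of the original card.

```
from llama_cpp import Llama

# Load the 3-bit quantized GGUF (adjust model_path to your local download location).
llm = Llama(
    model_path="./Phi-4-mini-instruct-GGUF-Q3_K_M.gguf",
    n_ctx=4096,  # context window; reduce if RAM is limited
)

# Simple chat-style request against the instruct model.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
    ],
    max_tokens=128,
)

print(response["choices"][0]["message"]["content"])
```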