---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/phi-4-unsloth-bnb-4bit
datasets:
- Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B
- ServiceNow-AI/R1-Distill-SFT
model-index:
- name: ThinkPhi1.1-Tensors
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 39.08
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 49.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.0
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.49
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.28
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 43.42
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
      name: Open LLM Leaderboard
---

# Uploaded model

- **Developed by:** Quazim0t0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
- **Format:** GGUF
- **Training:** 4-5 hours on an A800, using the Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B and ServiceNow-AI/R1-Distill-SFT datasets.
- **Cost:** about $5 of training; I'm genuinely impressed by the results.

If you use this model with Open WebUI, here is a simple function that organizes the model's responses: https://openwebui.com/f/quaz93/phithink/

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Quazim0t0__ThinkPhi1.1-Tensors-details).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.90 |
| IFEval (0-Shot)     | 39.08 |
| BBH (3-Shot)        | 49.14 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       |  6.49 |
| MuSR (0-shot)       | 11.28 |
| MMLU-PRO (5-shot)   | 43.42 |
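
# Usage

A minimal usage sketch with Hugging Face `transformers`, assuming the repo ID `Quazim0t0/ThinkPhi1.1-Tensors` hosts Transformers-format weights alongside the GGUF files and ships a chat template; the prompt, dtype, and generation settings below are illustrative, not prescriptive.

```python
# Minimal sketch: load the model and run one chat turn.
# Assumptions: Transformers-format weights are present in the repo and a chat
# template is bundled with the tokenizer; the GGUF files instead target
# llama.cpp-style runtimes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Quazim0t0/ThinkPhi1.1-Tensors"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 on a GPU that supports it
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Briefly explain why the sky is blue."},
]

# Build the prompt with the model's own chat template, then generate.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
)

# Print only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the model wraps its chain of thought in a dedicated reasoning span (the Open WebUI function linked above exists to fold that part of the response), you may want to strip or collapse it before showing the final answer to users.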