---
base_model: unsloth/qwen2.5-coder-14b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
datasets:
- openai/gsm8k
---

# My Reasoning Model

## System Prompt Format

Respond in the following format:

```
...

...
```

I fine-tuned the model on `openai/gsm8k`, and to keep costs under control, I used a single A100.

Enjoy, but please note that this model is experimental and I used it to define my pipeline. I will be testing fine-tuning larger, more capable models, which I suspect will add more value in the short term.

---

# Uploaded model

- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-14b-instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
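
## Example Usage

A minimal inference sketch using the Transformers chat API, assuming a Transformers-compatible upload of this fine-tune. The repo id below is a placeholder, not the confirmed name of this repository, and the system prompt simply mirrors the format section above.

```python
# Hedged sketch: load the fine-tuned model and prompt it with the reasoning format.
# "dbands/<this-repo>" is a placeholder -- replace it with the actual Hub repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dbands/<this-repo>"  # placeholder repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# System prompt following the format described in this card.
system_prompt = "Respond in the following format:\n...\n\n..."

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "If a notebook costs 3 dollars and a pen costs 2 dollars, how much do 4 notebooks and 3 pens cost?"},
]

# Build the chat-formatted prompt and generate a response.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```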