---
license: mit
datasets:
  - lucasmccabe-lmi/CodeAlpaca-20k
language:
  - en
metrics:
  - accuracy
base_model:
  - Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: peft
tags:
  - text-generation-inference
  - code
---

# Model Card for ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct

This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework. The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).

## Model Details

Please check the following GitHub project for model details and evaluation results (work in progress): [https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/](https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/)

## How to Get Started with the Model

Load the adapter on top of the base model as follows (a minimal end-to-end generation sketch is also included at the end of this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the federated fine-tuned PEFT adapter
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct")
model = PeftModel.from_pretrained(base_model, "ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct")
```

### Evaluation Results (Accuracy)

- **MBPP**: 21.20 %
- **HumanEval**: 36.59 %
- **MultiPL-E (JS)**: 40.38 %
- **MultiPL-E (C++)**: 33.55 %
- **Average**: 33.00 %

### Communication Budget

8766.51 MB

### Framework versions

- PEFT 0.14.0
- Flower 1.13.0
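
### Generation Example

For reference, here is a minimal end-to-end sketch showing how the adapter-augmented model can be prompted through the base model's chat template. The example prompt and generation settings (e.g. `max_new_tokens`) are illustrative assumptions, not values used during training or evaluation.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Coder-0.5B-Instruct"
adapter_id = "ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct"

# The tokenizer comes from the base model; the PEFT adapter does not change the vocabulary
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Format the request with the base model's chat template (prompt is illustrative)
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a completion; max_new_tokens is an arbitrary choice for this sketch
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```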