mrs83 committed on
Commit 9d87942 · verified · 1 Parent(s): d5351b2

Update README.md

Files changed (1): README.md +44 -1
README.md CHANGED
@@ -13,4 +13,47 @@ library_name: peft
  tags:
  - text-generation-inference
  - code
- ---
+ ---
+ 
+ # Model Card for ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct
+ 
+ This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.
+ 
+ The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).
+ 
+ ## Model Details
+ 
+ Model details and evaluation results are documented in the following GitHub project (work in progress):
+ 
+ [https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/](https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/)
+ 
+ ## How to Get Started with the Model
+ 
+ First, install the `peft` and `transformers` packages:
+ 
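+ ```shell
+ # Adjust for your environment; the versions used for this adapter are listed under "Framework versions" below
+ pip install peft transformers
+ ```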
+ 
+ Then load the adapter on top of the base model:
+ 
+ ```python
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM
+ 
+ # Load the base model and apply the FlowerTune PEFT adapter on top of it
+ base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct")
+ model = PeftModel.from_pretrained(base_model, "ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct")
+ ```
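+ 
+ A minimal generation sketch, assuming the base model's tokenizer and chat template; the prompt is only illustrative:
+ 
+ ```python
+ from transformers import AutoTokenizer
+ 
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct")
+ 
+ # Build a chat-formatted prompt and decode only the newly generated tokens
+ messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+ 
+ For standalone inference, a LoRA-style adapter can also be folded into the base weights with `model.merge_and_unload()`.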
+ 
+ ### Evaluation Results (Accuracy)
+ 
+ - **MBPP**: 21.20 %
+ - **HumanEval**: 36.59 %
+ - **MultiPL-E (JS)**: 40.38 %
+ - **MultiPL-E (C++)**: 33.55 %
+ - **Average**: 33.00 %
+ 
+ ### Communication Budget
+ 
+ 8766.51 MB
+ 
+ ### Framework versions
+ 
+ - PEFT 0.14.0
+ - Flower 1.13.0