mrs83 committed · verified
Commit 07e8cbe · 1 Parent(s): a5f79fa

Update README.md

Files changed (1):
  1. README.md (+18 -14)
README.md CHANGED
@@ -15,15 +15,31 @@ tags:
   - code
 ---
 
-# Model Card for ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct
+# Model Card for FlowerTune-Qwen2.5-Coder-0.5B-Instruct-PEFT
 
 This PEFT adapter has been trained by using [Flower](https://flower.ai/), a friendly federated AI framework.
 
 The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).
 
+### Evaluation Results (Accuracy)
+
+- **MBPP**: 25.80 %
+- **HumanEval**: 37.81 %
+- **MultiPL-E (JS)**: 41.00 %
+- **MultiPL-E (C++)**: 32.92 %
+- **Average**: 34.38 %
+
+### Communication Budget
+
+8766.51 MB Megabytes
+
+### Training Loss Plot
+
+![Training Loss](./train_loss.png)
+
 ## Model Details
 
-Please check the following GitHub project for model details and evaluation results (Work in Progress!!!):
+Please check the following GitHub project for model details and evaluation results:
 
 [https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/](https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/)
 
@@ -39,18 +55,6 @@ base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instr
 model = PeftModel.from_pretrained(base_model, "ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct")
 ```
 
-### Evaluation Results (Accuracy)
-
-- **MBPP**: 21.20 %
-- **HumanEval**: 36.59 %
-- **MultiPL-E (JS)**: 40.38 %
-- **MultiPL-E (C++)**: 33.55 %
-- **Average**: 33.00 %
-
-### Communication Budget
-
-8766.51 MB Megabytes
-
 ### Framework versions
 
 - PEFT 0.14.0
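
For context, a minimal usage sketch expanding on the snippet shown in the diff: it loads the Qwen/Qwen2.5-Coder-0.5B-Instruct base model, attaches this PEFT adapter with `PeftModel.from_pretrained`, and generates a completion. The tokenizer handling, chat-template call, and generation settings below are illustrative assumptions and are not part of the commit.

```python
# Minimal usage sketch (illustrative, not taken from the model card).
# Assumes the transformers and peft packages are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-0.5B-Instruct"
adapter_id = "ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the federated-tuned adapter

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```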