mrs83 committed · Commit 5fbb6a4 · verified · 1 Parent(s): 9e06c72

Update README.md

Files changed (1): README.md (+7 -7)
README.md CHANGED

@@ -19,11 +19,7 @@ tags:
 
 ![Training Loss](./train_loss.png)
 
-This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.
-
-The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).
-
-### Evaluation Results (Accuracy)
+## Evaluation Results (Accuracy)
 
 - **MBPP**: 25.80 %
 - **HumanEval**: 37.81 %
@@ -31,13 +27,17 @@
 - **MultiPL-E (C++)**: 32.92 %
 - **Average**: 34.38 %
 
-### Communication Budget
+## Communication Budget
 
 8766.51 MB
 
 ## Model Details
 
-Please check the following GitHub project for model details and evaluation results:
+This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.
+
+The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).
+
+Please check the following GitHub project for details on how to reproduce the training and evaluation steps:
 
 [https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/](https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/)
 
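For readers who want to try the adapter, a minimal loading sketch using the standard `transformers` + `peft` APIs. The repo ids below (`ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct` as the adapter and `Qwen/Qwen2.5-Coder-0.5B-Instruct` as the base checkpoint) are assumptions inferred from the project name, not confirmed by this commit:

```python
# Hypothetical usage sketch -- repo ids are assumptions, not taken from this commit.
BASE_ID = "Qwen/Qwen2.5-Coder-0.5B-Instruct"                          # assumed base checkpoint
ADAPTER_ID = "ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct"   # assumed adapter repo id


def load_adapted_model():
    """Load the base model and attach the PEFT adapter weights on top."""
    # Imports are local so the sketch can be read without the libraries installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_ID)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # merge-free adapter attach
    return tokenizer, model


# Example (downloads weights, so it is not executed here):
# tokenizer, model = load_adapted_model()
# inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
# print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```

`PeftModel.from_pretrained` keeps the adapter weights separate from the base model; use `model.merge_and_unload()` only if a single merged checkpoint is needed for deployment.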