TomPei committed (verified)
Commit 5ce15f0 · 1 Parent(s): 2e35a8e

Update README.md

Files changed (1):
  1. README.md (+3 −3)
README.md CHANGED
@@ -60,11 +60,11 @@ To simplify the comparison, we chose the Pass@1 metric for the Python language,
  | Model | HumanEval python pass@1 |
  | --- | --- |
  | CodeLlama-7b-hf | 30.5% |
- | opencsg-CodeLlama-7b-v0.1(4k) | **43.9%** |
+ | opencsg-CodeLlama-7b-v0.1 | **43.9%** |
  | CodeLlama-13b-hf | 36.0% |
- | opencsg-CodeLlama-13b-v0.1(4k) | **51.2%** |
+ | opencsg-CodeLlama-13b-v0.1 | **51.2%** |
  | CodeLlama-34b-hf | 48.2% |
- | opencsg-CodeLlama-34b-v0.1(4k) | **56.1%** |
+ | opencsg-CodeLlama-34b-v0.1 | **56.1%** |

  **TODO**
  - We will provide more benchmark scores on fine-tuned models in the future.
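For context on the metric in the table above: HumanEval pass@k is commonly computed with the unbiased estimator from the original Codex evaluation setup, where `n` samples are generated per problem and `c` of them pass the unit tests. The sketch below is illustrative only (the function name and sample counts are assumptions, not taken from this repo's evaluation code); note that pass@1 reduces to the plain fraction of passing samples.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples drawn (without replacement) from n generations is correct,
    given that c of the n generations pass the unit tests."""
    if n - c < k:
        # Fewer than k failing samples exist, so any draw of k must
        # include at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 is simply c / n, e.g. 3 passing out of 10 samples:
print(pass_at_k(10, 3, 1))
```

The per-problem scores are then averaged over all 164 HumanEval problems to produce a percentage like those in the table.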