loubnabnl (HF Staff) committed
Commit e0294a1 · 1 Parent(s): 922f353

update eval

Files changed (1):
  1. evaluation/intro.txt (+9 -4)
evaluation/intro.txt CHANGED
@@ -1,5 +1,5 @@
 A popular evaluation framework for code generation models is the [pass@k](https://huggingface.co/metrics/code_eval) metric on the [HumanEval](https://huggingface.co/datasets/openai_humaneval) dataset, which was introduced in the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf). The dataset includes 164 handwritten programming problems. In the pass@k metric, k code samples are generated per problem; a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported.
-In most papers, 200 candidate program completions are sampled, and pass@1, pass@10, and pass@100 are computed using an unbiased sampling estimator. The table below shows the HumanEval scores of CodeParrot, InCoder, GPT-neo, GPT-J and Codex (not open-source).
+In most papers, 200 candidate program completions are sampled, and pass@1, pass@10, and pass@100 are computed using an unbiased sampling estimator. The table below shows the HumanEval scores of CodeParrot, InCoder, PolyCoder, CodeGen and Codex (not open-source).
 
 
 | Model | pass@1 | pass@10 | pass@100|
@@ -9,12 +9,17 @@ In most papers, 200 candidate program completions are sampled, and pass@1, pass@
 |||||
 |InCoder (6.7B) | 15.2% | 27.8% | 47.00% |
 |||||
+|PolyCoder (160M)| 2.13% | 3.35% | 4.88% |
+|PolyCoder (400M)| 2.96% | 5.29% | 11.59% |
+|PolyCoder (2.7B)| 5.59% | 9.84% | 17.68% |
+|||||
+|CodeGen-Mono (350M)| 12.76% | 23.11% | 35.19% |
+|CodeGen-Mono (2.7B)| 23.70% | 36.64% | 57.01% |
+|CodeGen-Mono (16.1B)| **29.28%** | **49.86%** | **75.00%** |
+|||||
 |Codex (25M)| 3.21% | 7.1% | 12.89%|
 |Codex (300M)| 13.17%| 20.37% | 36.27% |
 |Codex (12B)| 28.81%| 46.81% | 72.31% |
-|||||
-|GPT-neo (1.5B)| 4.79% | 7.47% | 16.30% |
-|GPT-J (6B)| 11.62% | 15.74% | 27.74% |
 
 We can load the HumanEval dataset and the pass@k metric from 🤗 [`datasets`](https://huggingface.co/docs/datasets/index)
 
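For reference, the unbiased estimator mentioned in the file (from the Codex paper) generates n ≥ k samples per problem, counts the number c that pass the unit tests, and estimates pass@k for that problem as 1 − C(n−c, k)/C(n, k); the reported score is the average over all 164 problems. A minimal sketch in Python, using the numerically stable product form given in the paper (the 200/37 figures in the usage line are made up for illustration):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased per-problem pass@k: 1 - C(n-c, k) / C(n, k),
    with n total samples and c samples that pass the unit tests."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # Evaluate 1 - C(n-c, k) / C(n, k) as a running product for stability
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical problem: 200 samples drawn, 37 pass the tests
print({f"pass@{k}": round(pass_at_k(200, 37, k), 4) for k in (1, 10, 100)})
```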
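And a sketch of the loading step described in the last line of the file, assuming a `datasets` version that still ships metrics via `load_metric` (the code_eval metric later moved to the 🤗 `evaluate` library), with a toy hand-written candidate pair standing in for real model samples:

```python
import os
from datasets import load_dataset, load_metric

# code_eval executes model-generated code, so it must be enabled explicitly
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

human_eval = load_dataset("openai_humaneval")  # 164 problems in the "test" split
code_eval = load_metric("code_eval")

# Toy check: one problem, two candidates (one correct, one wrong)
pass_at_k, results = code_eval.compute(
    references=["assert add(2, 3) == 5"],
    predictions=[["def add(a, b): return a + b", "def add(a, b): return a - b"]],
    k=[1, 2],
)
print(pass_at_k)  # {'pass@1': 0.5, 'pass@2': 1.0}
```

In a real run, each inner list would hold the model's sampled completions of `human_eval["test"][i]["prompt"]`, and the reference would be built from that problem's `test` field plus a call to its `entry_point`.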