ronniecao committed · Commit 8f0fd82 · verified · 1 Parent(s): 9dee2ee

Update README.md

Files changed (1): README.md (+34 −41)

README.md CHANGED
@@ -1,15 +1,3 @@
- ---
- license: apache-2.0
- task_categories:
- - text-generation
- language:
- - en
- tags:
- - code generation
- size_categories:
- - 1K<n<10K
- ---
-
  # Codev-Bench

  ## Introduction
@@ -25,23 +13,13 @@ In detail, first, we extract unit test classes and functions from real GitHub re
  We split the completion sub-scenes or capabilities that users may encounter while developing in an IDE into the following parts:

- **Scene1**. &#9989; Completing complete code blocks (including functions, conditional logic blocks, loop logic blocks, comments, ordinary statements, etc.).
-
- - **Scene1.1**. &#9989; The context of the code block to be completed is fully complete.
-
- - **Scene1.2**. &#9989; The context of the code block to be completed has an empty body, but the external context of the function is complete.
-
- - **Scene1.3**. &#9989; The context following the code block to be completed is completely empty.
-
- **Scene2**. &#9989; Completing a portion of code within a specific code block.
-
- - **Scene2.1**. &#9989; Complete a portion of code within the code block.
-
- - **Scene2.2**. &#9989; The code block is already complete and should not have any code added.
-
- **Scene3**. &#128260; Completing code based on classes and functions defined in other files.
-
- **Scene4**. &#128260; Completing code based on related and similar code within the project.
+ > &#9989; **Scenario 1 - Full block completion**: The model is tasked with completing a full code block (e.g., a function, if, for, try, or ordinary statement block) given a complete, unbroken surrounding context. To pass, the model must accurately complete the block and stop at the correct point so that the completed code passes the unit test.
+
+ > &#9989; **Scenario 2 - Incomplete suffix completion**: Compared to Scenario 1, this scenario focuses on cases where the suffix content following the current cursor is incomplete. It covers two sub-cases: one where all the suffix content after the cursor in the entire file is empty, and another where only the content within the current function body after the cursor is missing.
+
+ > &#9989; **Scenario 3 - Inner block completion**: The model is required to complete a portion of a code block given a complete, unbroken surrounding context. In addition, 20% of the samples in this scenario have an empty ground truth, testing the model's ability to recognize that the current block is already complete and no further completion is needed.
+
+ > &#9989; **Scenario 4 - RAG-based completion**: The model builds upon the full block completion task by incorporating a Retrieval-Augmented Generation (RAG) module. The repository is partitioned into chunks, with only functions considered as candidates. The function containing the current code is used as the query, and the query's embedding is compared with the embeddings of the candidate functions. The top 3 most similar candidates are then inserted into the prompt as hints to guide code generation (see the sketch after this hunk).


  ## How To Use
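
The retrieval pipeline described for Scenario 4 above can be pictured with a short sketch. This is a minimal illustration, not the benchmark's actual implementation: `embed` stands in for whatever sentence-embedding model is used, candidate functions are plain strings, and only the top-3 cosine-similarity selection mirrors the description.

```python
# Illustrative sketch of the Scenario 4 retrieval step (hypothetical helpers).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding; swap in a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def top3_hints(query_function: str, candidates: list[str]) -> list[str]:
    """Rank candidate functions by similarity to the enclosing function; keep the top 3."""
    q = embed(query_function)
    ranked = sorted(candidates, key=lambda fn: cosine(q, embed(fn)), reverse=True)
    return ranked[:3]
```

The three retrieved functions would then be prepended to the prompt as hints ahead of the usual prefix/suffix context.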
@@ -105,19 +83,13 @@ If almost all the unit tests run successfully, researchers and developers can pro
  We split the completion sub-scenes or capabilities as follows:

- **Scene1.1**: `./prompts/prefix_suffix_full_complete_current_block_no_evidence.jsonl` and `./prompts/prefix_suffix_full_complete_current_block_with_evidence.jsonl`
-
- **Scene1.2**: `./prompts/prefix_full_suffix_func_empty_complete_current_block_no_evidence.jsonl` and `./prompts/prefix_full_suffix_func_empty_complete_current_block_with_evidence.jsonl`
-
- **Scene1.3**: `./prompts/prefix_full_suffix_empty_complete_current_block_no_evidence.jsonl` and `./prompts/prefix_full_suffix_empty_complete_current_block_with_evidence.jsonl`
-
- **Scene2.1**: `./prompts/complete_current_header_inner_block_completion.jsonl`
-
- **Scene2.2**: `./prompts/complete_current_header_empty_completion.jsonl`
-
- **Scene3**: Look forward to it.
-
- **Scene4**: Look forward to it.
+ **Scenario 1**: `./prompts/prefix_suffix_full_complete_current_block_no_evidence.jsonl`.
+
+ **Scenario 2**: `./prompts/complete_current_header_inner_block_completion.jsonl` and `./prompts/complete_current_header_empty_completion.jsonl`.
+
+ **Scenario 3**: `./prompts/prefix_full_suffix_func_empty_complete_current_block_no_evidence.jsonl` and `./prompts/prefix_full_suffix_empty_complete_current_block_no_evidence.jsonl`.
+
+ **Scenario 4**: `./prompts/prefix_suffix_full_complete_current_block_with_repo_rag_oracle`.

  The structure of the prompts is as follows:
  ```
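
Each prompt file listed above is JSON Lines, so a quick way to inspect one before reading the full structure is to load it with pandas (a sketch that assumes only one JSON object per line; the exact field names follow the prompt structure documented in the README):

```python
# Peek at a prompt file (illustrative; the path is the Scenario 1 entry above).
import pandas as pd

df = pd.read_json(
    "./prompts/prefix_suffix_full_complete_current_block_no_evidence.jsonl",
    lines=True,
)
print(df.columns.tolist())  # discover the actual prompt fields
print(df.head(3))           # inspect a few samples
```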
@@ -185,15 +157,36 @@ myenv/bin/python src/evaluate.py --method evaluate_prediction --model codegemma_
  ```

  Thus, the result file `./predicts/prefix_suffix_full_complete_current_block_no_evidence/results/codegemma_7b.jsonl.x` will be generated. Then, users can use the following command to summarize the results:
- ```
- myenv/bin/python src/evaluate.py --method print_scores --model codegemma_7b
+ ```shell
+ # for scenario 1
+ myenv/bin/python src/evaluate.py --method print_all_scores --model codegemma_7b --mode prefix_suffix_full_complete_current_block_no_evidence
+ # for scenario 2
+ myenv/bin/python src/evaluate.py --method print_all_scores --model codegemma_7b --mode complete_current_header_inner_block_and_empty_completion
+ # for scenario 3
+ myenv/bin/python src/evaluate.py --method print_all_scores --model codegemma_7b --mode prefix_suffix_empty_current_block
+ # for scenario 4
+ myenv/bin/python src/evaluate.py --method print_all_scores --model codegemma_7b --mode prefix_suffix_full_complete_current_block_with_repo_rag_oracle
  ```


  ## Experimental Results

- ### Scene1.1
-
- We evaluate some popular general LLMs and code LLMs on the sub dataset **Scene1.1** of the CodevBench dataset. The results are as follows:
-
- ![results of Scene1.1](https://github.com/LingmaTongyi/Codev-Bench/blob/main/images/prefix_suffix_full_complete_current_block_no_evidence.png?raw=true)
+ ### Overall Results
+
+ ![overall results](https://github.com/LingmaTongyi/Codev-Bench/raw/main/images/total.png)
+
+ ### The Results of Scenario 1
+
+ ![the results of scenario 1](https://github.com/LingmaTongyi/Codev-Bench/raw/main/images/scenario1.png)
+
+ ### The Results of Scenario 2
+
+ ![the results of scenario 2](https://github.com/LingmaTongyi/Codev-Bench/raw/main/images/scenario2.png)
+
+ ### The Results of Scenario 3
+
+ ![the results of scenario 3](https://github.com/LingmaTongyi/Codev-Bench/raw/main/images/scenario3.png)
+
+ ### The Results of Scenario 4
+
+ ![the results of scenario 4](https://github.com/LingmaTongyi/Codev-Bench/raw/main/images/scenario4.png)