Update README.md
# 🏟️ Long Code Arena (Project-level code completion)

This is the benchmark for the Project-level code completion task, part of the [🏟️ Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
Each datapoint contains the file for completion, a list of lines to complete with their categories (see the categorization below), and a repository snapshot that can be used to build the context.
All the repositories are published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.

## How-to

Load the data via [load_dataset](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
```
from datasets import load_dataset

config_names = [
    ...
]

ds = load_dataset('JetBrains-Research/lca-code-completion', config_name, split='test')
```
## Dataset Structure

Datapoints in the dataset have the following structure:

* `repo` — repository name in the format `{GitHub_user_name}__{repository_name}`
* `commit_hash` — commit hash of the repository
* `completion_file` — dictionary with the completion file content in the following format:
  * `filename` — path to the completion file
  * `content` — content of the completion file
* `completion_lines` — dictionary where the keys are categories of lines and the values are lists of integers (the numbers of lines to complete). The categories are:
  * `committed` — line contains at least one function or class from the files that were added in the completion file's commit
  * `inproject` — line contains at least one function or class from the repository snapshot at the moment of completion
  * `infile` — line contains at least one function or class from the completion file
  * `common` — line contains at least one function or class with a common name, e.g., `main`, `get`, etc.
  * `non_informative` — line that was classified as non-informative, e.g., too short, contains comments, etc.
  * `random` — all other lines
* `repo_snapshot` — dictionary with a snapshot of the repository before the commit. It has the same structure as `completion_file`, but the filenames and contents are organized as lists.
* `completion_lines_raw` — same as `completion_lines`, but before sampling
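As a minimal sketch of how these fields fit together, the snippet below groups a completion file's lines by category. The datapoint is invented for illustration (only the field names come from the structure above), and 0-based line numbering is an assumption that should be checked against the real data.

```python
# Hypothetical datapoint mirroring the structure above; the repo name,
# file contents, and line numbers are invented for illustration.
datapoint = {
    "repo": "example_user__example_repo",
    "commit_hash": "0123abcd",
    "completion_file": {
        "filename": "src/app.py",
        "content": "import os\n\n\ndef main():\n    path = os.getcwd()\n    print(path)\n",
    },
    # Assumed here to be 0-based indices into the file's lines.
    "completion_lines": {
        "common": [3],
        "infile": [4],
        "random": [5],
    },
}

lines = datapoint["completion_file"]["content"].splitlines()

# Pair each line number with the line text, per category.
by_category = {
    category: [(i, lines[i]) for i in numbers]
    for category, numbers in datapoint["completion_lines"].items()
}

print(by_category["common"])  # [(3, 'def main():')]
```

In the real data, `completion_lines` can contain all of the categories listed above, not just the three shown here.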
## How we collected the data

To collect the data, we cloned repositories from GitHub where the main language is Python.
The completion file for each datapoint is a `.py` file that was added to the repository in a commit.
The state of the repository before this commit is the repo snapshot.

The dataset configurations are based on the number of characters in `.py` files from the repository snapshot:
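To illustrate the quantity the configurations are bucketed by, here is a small sketch that totals the characters in a snapshot's `.py` files. The snapshot is invented for illustration; only the parallel-lists layout of `repo_snapshot` comes from the structure above.

```python
# Hypothetical repo snapshot in the `repo_snapshot` format described above:
# parallel lists of filenames and file contents.
repo_snapshot = {
    "filename": ["src/app.py", "src/utils.py", "README.md"],
    "content": ["print('hi')\n", "def helper():\n    pass\n", "# docs\n"],
}

# Total number of characters in `.py` files only -- the quantity the
# dataset configurations are based on (non-Python files are ignored).
py_chars = sum(
    len(text)
    for name, text in zip(repo_snapshot["filename"], repo_snapshot["content"])
    if name.endswith(".py")
)

print(py_chars)  # 35
```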
## Scores

You can find the results of running various models on this dataset in our [leaderboard](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).