Modalities: Text · Formats: parquet · Size: < 1K · Libraries: Datasets, Dask
Areyde committed · Commit 4d862bd · verified · Parent: 74bd9a4

Update README.md

Files changed (1): README.md (+26 −19)
README.md CHANGED
@@ -66,10 +66,15 @@ configs:
 ---
 
 
-# 🏟️ Long Code Arena (Project-Level Code Completion)
-This is the benchmark for the Project-Level Code Completion task as part of the [🏟️ Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
-
-## How to load the dataset
+# 🏟️ Long Code Arena (Project-level code completion)
+This is the benchmark for the Project-level code completion task as part of the [🏟️ Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
+Each datapoint contains the file for completion, a list of lines to complete with their categories (see the categorization below), and a repository snapshot that can be used to build the context.
+All the repositories are published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.
+
+## How-to
+
+Load the data via [load_dataset](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
+
 ```
 from datasets import load_dataset
 
@@ -82,27 +87,29 @@ config_names = [
 
 ds = load_dataset('JetBrains-Research/lca-code-completion', config_name, split='test')
 ```
-## Data Point Structure
-
-* `repo` – repository name in the format `{GitHub_user_name}__{repository_name}`
-* `commit_hash` – commit hash
+## Dataset Structure
+
+Datapoints in the dataset have the following structure:
+
+* `repo` – repository name in the format `{GitHub_user_name}__{repository_name}`
+* `commit_hash` – commit hash of the repository
 * `completion_file` – dictionary with the completion file content in the following format:
-  * `filename` – filepath to the completion file
-  * `content` – content of the completion file
-* `completion_lines` – dictionary where the keys are classes of lines and the values are lists of integers (numbers of lines to complete). The classes are:
-  * `committed` – line contains at least one function or class that was declared in the committed files from `commit_hash`
-  * `inproject` – line contains at least one function or class that was declared in the project (excluding previous)
-  * `infile` – line contains at least one function or class that was declared in the completion file (excluding previous)
-  * `common` – line contains at least one function or class that was classified as common, e.g., `main`, `get`, etc. (excluding previous)
-  * `non_informative` – line that was classified as non-informative, e.g., too short, contains comments, etc.
-  * `random` – randomly sampled from the rest of the lines
-* `repo_snapshot` – dictionary with a snapshot of the repository before the commit. Has the same structure as `completion_file`, but the filenames and contents are organized as lists.
-* `completion_lines_raw` – the same as `completion_lines`, but before sampling.
+  * `filename` – path to the completion file
+  * `content` – content of the completion file
+* `completion_lines` – dictionary where the keys are categories of lines and the values are lists of integers (numbers of lines to complete). The categories are:
+  * `committed` – line contains at least one function or class from the files that were added in the completion file's commit
+  * `inproject` – line contains at least one function or class from the repository snapshot at the moment of completion
+  * `infile` – line contains at least one function or class from the completion file
+  * `common` – line contains at least one function or class with a common name, e.g., `main`, `get`, etc.
+  * `non_informative` – line that was classified as non-informative, e.g., too short, contains comments, etc.
+  * `random` – other lines
+* `repo_snapshot` – dictionary with a snapshot of the repository before the commit. It has the same structure as `completion_file`, but the filenames and contents are organized as lists.
+* `completion_lines_raw` – same as `completion_lines`, but before sampling
 
 ## How we collected the data
 
 To collect the data, we cloned repositories from GitHub where the main language is Python.
-The completion file for each data point is a `.py` file that was added to the repository in a commit.
+The completion file for each datapoint is a `.py` file that was added to the repository in a commit.
 The state of the repository before this commit is the repo snapshot.
 
 The dataset configurations are based on the number of characters in `.py` files from the repository snapshot:
@@ -159,4 +166,4 @@ The dataset configurations are based on the number of characters in `.py` files
 
 
 ## Scores
-[HF Space](https://huggingface.co/spaces/JetBrains-Research/long-code-arena)
+You can find the results of running various models on this dataset in our [leaderboard](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
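
The datapoint structure described in the updated card can be turned into (prefix, target) completion examples. Below is a minimal sketch using a toy datapoint; all values are invented, and the 0-based line-index convention is an assumption to verify against the actual data:

```python
# Toy datapoint mirroring the documented schema; all values are invented.
datapoint = {
    "repo": "octocat__hello-world",
    "commit_hash": "0000000",
    "completion_file": {
        "filename": "src/app.py",
        "content": "import os\n\ndef main():\n    print(os.getcwd())\n",
    },
    # NOTE: 0-based line indices are an assumption made for this sketch;
    # check the dataset itself for its actual convention.
    "completion_lines": {"infile": [3], "non_informative": [1]},
}

lines = datapoint["completion_file"]["content"].splitlines()
examples = []
for category, line_numbers in datapoint["completion_lines"].items():
    for n in line_numbers:
        examples.append({
            "category": category,
            "prefix": "\n".join(lines[:n]),  # context preceding the line
            "target": lines[n],              # the line to be completed
        })
```

Grouping examples by category this way lets completion quality be reported separately for `infile`, `inproject`, and the other line classes.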
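
Since `repo_snapshot` stores filenames and contents as parallel lists, building a completion context from it could look like the following sketch (toy values; the `# FILE:` separator is an arbitrary choice for illustration, not a format the benchmark defines):

```python
# Toy snapshot with the documented parallel-list layout (values invented).
snapshot = {
    "filename": ["src/util.py", "README.md"],
    "content": ["def helper():\n    pass\n", "# demo project\n"],
}

# Concatenate the snapshot files into one context string, tagging each
# file with its path so the model can tell the files apart.
parts = []
for name, text in zip(snapshot["filename"], snapshot["content"]):
    parts.append(f"# FILE: {name}\n{text}")
context = "\n".join(parts)
```

In practice, the context would be truncated or ranked to fit the model's window; how to select and order snapshot files is left to the benchmark user.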