---
dataset_info:
- config_name: default
  features:
  - name: hash
    dtype: string
  - name: repo
    dtype: string
  - name: date
    dtype: string
  - name: license
    dtype: string
  - name: message
    dtype: string
  - name: mods
    list:
    - name: change_type
      dtype: string
    - name: old_path
      dtype: string
    - name: new_path
      dtype: string
    - name: diff
      dtype: string
  splits:
  - name: test
    num_examples: 163
- config_name: labels
  features:
  - name: hash
    dtype: string
  - name: repo
    dtype: string
  - name: date
    dtype: string
  - name: license
    dtype: string
  - name: message
    dtype: string
  - name: label
    dtype: int8
  - name: comment
    dtype: string
  splits:
  - name: test
    num_bytes: 272359
    num_examples: 858
- config_name: retrieval_bm25
  features:
  - name: hash
    dtype: string
  - name: repo
    dtype: string
  - name: mods
    dtype: string
  - name: context
    list:
    - name: source
      dtype: string
    - name: content
      dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: commitchronicle-py-long/test-*
- config_name: labels
  data_files:
  - split: test
    path: commitchronicle-py-long-labels/test-*
- config_name: full_files
  data_files:
  - split: 4k
    path: context/files/files_4k.parquet
  - split: 8k
    path: context/files/files_8k.parquet
  - split: 16k
    path: context/files/files_16k.parquet
  - split: 32k
    path: context/files/files_32k.parquet
  - split: 64k
    path: context/files/files_64k.parquet
  - split: full
    path: context/files/files_full.parquet
- config_name: retrieval_bm25
  data_files:
  - split: 4k
    path: context/retrieval/bm25_4k.parquet
  - split: 8k
    path: context/retrieval/bm25_8k.parquet
  - split: 16k
    path: context/retrieval/bm25_16k.parquet
  - split: 32k
    path: context/retrieval/bm25_32k.parquet
  - split: 64k
    path: context/retrieval/bm25_64k.parquet
license: apache-2.0
---

# ๐ŸŸ๏ธ Long Code Arena (Commit message generation)

This is the benchmark for the Commit message generation task as part of the
🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).

The dataset is a manually curated subset of the Python test set from the 🤗 [CommitChronicle dataset](https://huggingface.co/datasets/JetBrains-Research/commit-chronicle), tailored for larger commits.

All the repositories are published under permissive licenses (MIT, Apache-2.0, and BSD-3-Clause). The datapoints can be removed upon request.

## How-to

```py
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")
```

Note that all the data we have is considered to be in the test split.

**Note.** Working with the git repositories under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory is not supported
via 🤗 Datasets. See the [Git Repositories](#git-repositories) section for more details.

## About

### Overview

In total, there are 163 commits from 34 repositories. For length statistics, refer to the [notebook](https://github.com/JetBrains-Research/lca-baselines/blob/main/commit_message_generation/notebooks/cmg_data_stats.ipynb) in our repository.

### Dataset Structure

The dataset contains two kinds of data: data about each commit (under the [`commitchronicle-py-long`](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation/tree/main/commitchronicle-py-long) folder) and compressed git repositories (under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation/tree/main/repos) folder).

#### Commits

Each example has the following fields:

| **Field** |              **Description**              |
|:---------:|:-----------------------------------------:|
|  `repo`   |            Commit repository.             |
|  `hash`   |               Commit hash.                |
|  `date`   |               Commit date.                |
| `license` |       Commit repository's license.        |
| `message` |              Commit message.              |
|  `mods`   | List of file modifications from a commit. |

Each file modification has the following fields:

|   **Field**   |                                          **Description**                                          |
|:-------------:|:-------------------------------------------------------------------------------------------------:|
| `change_type` | Type of change to the current file. One of: `ADD`, `COPY`, `RENAME`, `DELETE`, `MODIFY`, or `UNKNOWN`. |
|  `old_path`   |                           Path to the file before the change (might be empty).                           |
|  `new_path`   |                            Path to the file after the change (might be empty).                           |
|    `diff`     |                                     `git diff` for the current file.                                     |

Data point example:

```py
{'hash': 'b76ed0db81b3123ede5dc5e5f1bddf36336f3722',
 'repo': 'apache/libcloud',
 'date': '05.03.2022 17:52:34',
 'license': 'Apache License 2.0',
 'message': 'Add tests which verify that all OpenStack driver can be instantiated\nwith all the supported auth versions.\nNOTE: Those tests will fail right now due to the regressions being\nintroduced recently which breaks auth for some versions.',
 'mods': [{'change_type': 'MODIFY',
    'new_path': 'libcloud/test/compute/test_openstack.py',
    'old_path': 'libcloud/test/compute/test_openstack.py',
    'diff': '@@ -39,6 +39,7 @@ from libcloud.utils.py3 import u\n<...>'}],
}
```
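
For illustration, here is one way to render a commit's modifications as a single diff string (a sketch; the helper name and the header format are our assumptions, not necessarily how the combined diffs in the extra configs below were produced):

```py
def mods_to_diff(mods: list[dict]) -> str:
    # Concatenate per-file diffs, prefixing each with a git-style header
    # so that individual files stay distinguishable.
    chunks = []
    for mod in mods:
        header = f"diff --git a/{mod['old_path']} b/{mod['new_path']}"
        chunks.append(f"{header}\n{mod['diff']}")
    return "\n".join(chunks)
```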

#### Git Repositories

The compressed Git repositories for all the commits in this benchmark are stored under [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory.

Working with git repositories under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory is not supported directly via 🤗 Datasets.
You can use the [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/index) package to download the repositories. Sample code is provided below:

```py
import os
import tarfile

from huggingface_hub import hf_hub_download, list_repo_tree

data_dir = "..."  # replace with a path to where you want to store repositories locally

for repo_file in list_repo_tree("JetBrains-Research/lca-commit-message-generation", "repos", repo_type="dataset"):
    # Download each compressed repository archive from the dataset repo.
    file_path = hf_hub_download(
        repo_id="JetBrains-Research/lca-commit-message-generation",
        filename=repo_file.path,
        repo_type="dataset",
        local_dir=data_dir,
    )

    # Extract the archive next to the downloaded tarballs.
    with tarfile.open(file_path, "r:gz") as tar:
        tar.extractall(path=os.path.join(data_dir, "extracted_repos"))
```

For convenience, we also provide a full list of files in [`paths.json`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/blob/main/paths.json).

After you download and extract the repositories, you can work with each repository either via Git or via Python libraries like [GitPython](https://github.com/gitpython-developers/GitPython) or [PyDriller](https://github.com/ishepard/pydriller).
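
For instance, a minimal sketch with GitPython that checks out the repository state right before a benchmark commit (the directory layout is an assumption based on the extraction snippet above):

```py
import os

from git import Repo

data_dir = "..."     # same directory as in the download snippet above
repo_name = "..."    # name of one extracted repository directory
commit_hash = "..."  # `hash` field of a benchmark datapoint

repo = Repo(os.path.join(data_dir, "extracted_repos", repo_name))
# Check out the parent of the commit, i.e., the repository state a model
# would see when generating the commit message.
repo.git.checkout(f"{commit_hash}^")
```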

# Extra: longer context

## Full Files

To facilitate further research, we additionally provide the full contents of the modified files before and after each commit in the `full_files` dataset config. The `full` split provides the whole files, and the remaining splits truncate each file
given the maximum allowed number of tokens `n`. The files are truncated uniformly: essentially, the number of tokens for each file is limited to `max_num_tokens // num_files`.
We use the [DeepSeek-V3 tokenizer](https://huggingface.co/deepseek-ai/DeepSeek-V3) to count tokens.

```py
from datasets import load_dataset

dataset = load_dataset(
    "JetBrains-Research/lca-commit-message-generation",
    "full_files",
    split="16k",  # should be one of: '4k', '8k', '16k', '32k', '64k', 'full'
)
```
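
For illustration, here is a minimal sketch of the uniform truncation scheme described above (the helper name is ours, and we assume the tokenizer can be loaded via `transformers`):

```py
from transformers import AutoTokenizer

# Assumption: the DeepSeek-V3 tokenizer loads through transformers like this.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3", trust_remote_code=True)


def truncate_uniformly(files: list[str], max_num_tokens: int) -> list[str]:
    # Each file receives an equal share of the total token budget.
    per_file_budget = max_num_tokens // len(files)
    truncated = []
    for contents in files:
        token_ids = tokenizer.encode(contents)
        truncated.append(tokenizer.decode(token_ids[:per_file_budget]))
    return truncated
```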

Each example has the following fields:

* `repo`: commit repository
* `hash`: commit hash
* `mods`: commit modifications (combined into a single diff)
* `files`: a list of dictionaries, where each corresponds to a specific file changed in the commit and has the following keys:
  * `old_path`: file path before the commit
  * `old_contents`: file contents before the commit
  * `new_path`: file path after the commit
  * `new_contents`: file contents after the commit
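
With the old and new contents at hand, you can, for instance, reconstruct a per-file unified diff with the standard library (a sketch; it assumes one example loaded as shown above, and exact reconstruction only holds for the `full` split):

```py
import difflib

# Take the first changed file of the first example (loaded as shown above).
file = dataset[0]["files"][0]

diff = difflib.unified_diff(
    file["old_contents"].splitlines(keepends=True),
    file["new_contents"].splitlines(keepends=True),
    fromfile=file["old_path"],
    tofile=file["new_path"],
)
print("".join(diff))
```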

## Retrieval

To facilitate further research, we additionally provide context for each commit, as retrieved by a BM25 retriever, in the `retrieval_bm25` dataset config. For each commit, we run BM25 over all `.py` files in the corresponding repository
at its state before the commit (excluding the files changed in the commit). We retrieve up to 50 files most relevant to the commit diff and then, given the maximum allowed number of tokens `n`, add files until the total context length in tokens (including the diff),
as measured by the [DeepSeek-V3 tokenizer](https://huggingface.co/deepseek-ai/DeepSeek-V3), exceeds `n`, possibly truncating the last included file.
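
A minimal sketch of this construction (our assumptions: the `rank_bm25` package standing in for the retriever, whitespace tokenization for scoring, and a `count_tokens` helper standing in for the DeepSeek-V3 tokenizer):

```py
from rank_bm25 import BM25Okapi


def retrieve_context(diff: str, files: dict[str, str], n: int, count_tokens) -> list[dict]:
    # Score every candidate file against the commit diff with BM25.
    paths = list(files)
    bm25 = BM25Okapi([files[path].split() for path in paths])
    scores = bm25.get_scores(diff.split())
    ranked = sorted(zip(paths, scores), key=lambda item: item[1], reverse=True)[:50]

    # Add files by relevance until the budget (which includes the diff) runs out.
    context, budget = [], n - count_tokens(diff)
    for path, _ in ranked:
        content = files[path]
        if count_tokens(content) > budget:
            break  # the real pipeline may truncate this last file instead
        context.append({"source": path, "content": content})
        budget -= count_tokens(content)
    return context
```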

To access these, run the following:

```py
from datasets import load_dataset

dataset = load_dataset(
    "JetBrains-Research/lca-commit-message-generation",
    "retrieval_bm25",
    split="16k",  # should be one of: '4k', '8k', '16k', '32k', '64k'
)
```

Each example has the following fields:

* `repo`: commit repository
* `hash`: commit hash
* `mods`: commit modifications (combined into a single diff)
* `context`: context retrieved for the current commit; a list of dictionaries, where each corresponds to a specific file and has the following keys:
  * `source`: file path
  * `content`: file content
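
For example, a common way to consume this config is to prepend the retrieved files to the diff when prompting a model (a sketch; the prompt format is our assumption):

```py
example = dataset[0]

# Concatenate the retrieved files, then the diff, into a single model prompt.
context_block = "\n\n".join(
    f"# File: {item['source']}\n{item['content']}" for item in example["context"]
)
prompt = f"{context_block}\n\nWrite a commit message for this diff:\n{example['mods']}"
```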

# ๐Ÿท๏ธ Extra: commit labels

To facilitate further research, we additionally provide the manual labels for all the 858 commits that made it through the initial filtering. The final version of the dataset described above consists of the commits labeled either 4 or 5.

## How-to

```py
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-commit-message-generation", "labels", split="test")
```

Note that all the data we have is considered to be in the test split.
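
For example, to recover the subset that forms the final benchmark (the commits labeled 4 or 5), you can filter:

```py
benchmark_commits = dataset.filter(lambda example: example["label"] >= 4)
```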

## About

### Dataset Structure

Each example has the following fields:

| **Field** |                          **Description**                           |
|:---------:|:------------------------------------------------------------------:|
|  `repo`   |                         Commit repository.                         |
|  `hash`   |                            Commit hash.                            |
|  `date`   |                            Commit date.                            |
| `license` |                    Commit repository's license.                    |
| `message` |                          Commit message.                           |
|  `label`  |            Label of the current commit as a target for the CMG task.            |
| `comment` | Comment explaining the label for the current commit (optional, might be empty). |

Labels are on a 1–5 scale, where:

* 1 – strong no
* 2 – weak no
* 3 – unsure
* 4 – weak yes
* 5 – strong yes

Data point example:

```py
{'hash': '1559a4c686ddc2947fc3606e1c4279062cc9480f',
 'repo': 'appscale/gts',
 'date': '15.07.2018 21:00:39',
 'license': 'Apache License 2.0',
 'message': 'Add auto_id_policy and logs_path flags\n\nThese changes were introduced in the 1.7.5 SDK.',
 'label': 1,
 'comment': 'no way to know the version'}
```

## Citing
```
@article{bogomolov2024long,
  title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
  author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
  journal={arXiv preprint arXiv:2406.11612},
  year={2024}
}
```
You can find the paper [here](https://arxiv.org/abs/2406.11612).