---
license: other
language:
- code
- en
task_categories:
- text-generation
- summarization
tags:
- code
- commit_message_generation
pretty_name: CommitChronicle
size_categories:
- 1M<n<10M
dataset_info:
- config_name: default
  features:
  - name: author
    dtype: int64
  - name: date
    dtype: string
  - name: timezone
    dtype: int64
  - name: hash
    dtype: string
  - name: message
    dtype: string
  - name: mods
    list:
    - name: change_type
      dtype: string
    - name: old_path
      dtype: string
    - name: new_path
      dtype: string
    - name: diff
      dtype: string
  - name: language
    dtype: string
  - name: license
    dtype: string
  - name: repo
    dtype: string
  - name: original_message
    dtype: string
  splits:
  - name: test
    num_bytes: 5760117409
    num_examples: 1486267
  - name: train
    num_bytes: 30084265848
    num_examples: 7659458
  - name: validation
    num_bytes: 5905326070
    num_examples: 1554042
  download_size: 14168436205
  dataset_size: 41749709327
- config_name: subset_cmg
  features:
  - name: author
    dtype: int64
  - name: date
    dtype: string
  - name: timezone
    dtype: int64
  - name: hash
    dtype: string
  - name: message
    dtype: string
  - name: mods
    list:
    - name: change_type
      dtype: string
    - name: old_path
      dtype: string
    - name: new_path
      dtype: string
    - name: diff
      dtype: string
  - name: language
    dtype: string
  - name: license
    dtype: string
  - name: repo
    dtype: string
  - name: original_message
    dtype: string
  splits:
  - name: test
    num_bytes: 772774959
    num_examples: 204336
  download_size: 258151047
  dataset_size: 772774959
- config_name: subset_llm
  features:
  - name: author
    dtype: int64
  - name: date
    dtype: string
  - name: timezone
    dtype: int64
  - name: hash
    dtype: string
  - name: message
    dtype: string
  - name: mods
    list:
    - name: change_type
      dtype: string
    - name: old_path
      dtype: string
    - name: new_path
      dtype: string
    - name: diff
      dtype: string
  - name: language
    dtype: string
  - name: license
    dtype: string
  - name: repo
    dtype: string
  - name: original_message
    dtype: string
  splits:
  - name: test
    num_bytes: 15121048
    num_examples: 4025
  download_size: 5068039
  dataset_size: 15121048
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
- config_name: subset_cmg
  data_files:
  - split: test
    path: subset_cmg/test-*
- config_name: subset_llm
  data_files:
  - split: test
    path: subset_llm/test-*
---

# 📜 CommitChronicle 🔮

This is the dataset for commit message generation (and/or completion), introduced in the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.

Its key features:

* *large-scale and multilingual*: contains 10.7M commits from 11.9k GitHub repositories in 20 programming languages;
* *diverse*: avoids restrictive filtering on commit messages or commit diff structure;
* *suitable for experiments with commit history*: provides metadata about commit authors and dates and uses a split-by-project strategy.

## Dataset Creation

> 🔍 For further details, please refer to:
> * **Paper**: [https://arxiv.org/abs/2308.07655](https://arxiv.org/abs/2308.07655)
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)

We used the [GitHub Search](https://seart-ghs.si.usi.ch/) tool and the official GitHub API to select relevant repositories with permissive licenses (Apache, BSD 3-clause, MIT). On February 9th, 2023, we collected all commits made since 2017 from these repositories via [PyDriller](https://github.com/ishepard/pydriller). Next, we extensively cleaned the data, including filtering outliers, dropping commits from bot authors, and dropping duplicates.

Note: to avoid disclosing personal information, we replaced the commit authors' names and emails with unique identifiers.

## Dataset Structure

### Data Instances

Each data instance in the dataset is a commit.
[A commit example](https://github.com/saridormi/commit_chronicle/commit/a7fb3b64184f0af5b08285cce14b9139baa94049) would look like the following:

```
{
  'repo': 'saridormi/commit_chronicle',
  'hash': 'a7fb3b64184f0af5b08285cce14b9139baa94049',
  'author': 123,
  'date': '05.07.2021 15:10:07',
  'timezone': 0,
  'license': 'MIT License',
  'language': 'Jupyter Notebook',
  'message': 'Add license badge to readme',
  'original_message': 'Add license badge to readme',
  'mods': [{'change_type': 'MODIFY',
            'new_path': 'README.md',
            'old_path': 'README.md',
            'diff': '@@ -1,6 +1,6 @@\n'
                    ' # Commits dataset\n'
                    ' \n'
                    '-> :heavy_exclamation_mark: **TODO:** license\n'
                    '+\n'}],
}
```

### Data Fields

Each example has the following fields:

| **Field**          | **Description**                          |
|:------------------:|:----------------------------------------:|
| `repo`             | Commit repository.                       |
| `hash`             | Commit hash.                             |
| `author`           | Unique id of the commit author.          |
| `date`             | Commit date (from author).               |
| `timezone`         | Commit timezone (from author).           |
| `license`          | Commit repository's license.             |
| `language`         | Commit repository's main language.       |
| `message`          | Commit message (after processing).       |
| `original_message` | Commit message (without any processing). |
| `mods`             | List of file modifications from commit.  |

Each file modification has the following fields:

| **Field**     | **Description**                                                                                       |
|:-------------:|:-----------------------------------------------------------------------------------------------------:|
| `change_type` | Type of change to the current file. One of: `ADD`, `COPY`, `RENAME`, `DELETE`, `MODIFY` or `UNKNOWN`. |
| `old_path`    | Path to the file before the change (might be empty).                                                  |
| `new_path`    | Path to the file after the change (might be empty).                                                   |
| `diff`        | `git diff` output for the current file.                                                               |
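In code, such an instance is a plain dictionary, and the fields above can be processed directly. Below is a minimal, self-contained sketch of two common steps: parsing the `date` field and concatenating the `mods` into a single diff string as input for a commit message generation model. The `parse_date` and `mods_to_diff` helpers (and the day.month.year date format, inferred from the example instance) are illustrative assumptions, not part of the dataset tooling.

```python
from datetime import datetime

# Illustrative commit instance mirroring the schema above (values shortened).
commit = {
    "author": 123,
    "date": "05.07.2021 15:10:07",
    "message": "Add license badge to readme",
    "mods": [
        {
            "change_type": "MODIFY",
            "old_path": "README.md",
            "new_path": "README.md",
            "diff": "@@ -1,6 +1,6 @@\n # Commits dataset\n \n-> TODO: license\n+\n",
        }
    ],
}

def parse_date(date_str: str) -> datetime:
    # The example instance suggests a day.month.year format; this is an assumption.
    return datetime.strptime(date_str, "%d.%m.%Y %H:%M:%S")

def mods_to_diff(mods: list) -> str:
    # Concatenate per-file diffs into one string, prefixing each file's diff
    # with its change type and paths (one possible input format for a CMG model).
    parts = []
    for mod in mods:
        header = f"{mod['change_type']} {mod['old_path']} -> {mod['new_path']}"
        parts.append(header + "\n" + mod["diff"])
    return "\n".join(parts)

print(parse_date(commit["date"]).year)                 # → 2021
print(mods_to_diff(commit["mods"]).splitlines()[0])    # → MODIFY README.md -> README.md
```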
### Data Splits

We provide the following configurations:
* `default`
  * `train`: full training split (7.66M commits)
  * `validation`: full validation split (1.55M commits)
  * `test`: full test split (1.49M commits)
* `subset_cmg`
  * `test`: test subset used for experiments with CMG approaches (204k commits)
* `subset_llm`
  * `test`: test subset used for experiments with an LLM (4k commits)

## Considerations for Using the Data

> Adapted from [the Stack](https://huggingface.co/datasets/bigcode/the-stack).

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their open-access research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling personal information.

The dataset is a collection of commits from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.

## Citation

```
TODO
```
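As a closing usage sketch: the `author` and `date` metadata make it possible to reconstruct per-author commit histories, which is what the history-aware completion setting relies on. The records and the `history_for_author` helper below are illustrative assumptions (real instances would be loaded from the splits above, and the day.month.year date format is inferred from the example instance):

```python
from datetime import datetime

# Illustrative records with the dataset's `author`, `date`, and `message` fields.
commits = [
    {"author": 123, "date": "05.07.2021 15:10:07", "message": "Add license badge to readme"},
    {"author": 123, "date": "01.07.2021 09:00:00", "message": "Initial commit"},
    {"author": 456, "date": "02.07.2021 10:30:00", "message": "Fix typo"},
]

def history_for_author(commits: list, author_id: int) -> list:
    # Collect one author's commits and order them chronologically.
    own = [c for c in commits if c["author"] == author_id]
    own.sort(key=lambda c: datetime.strptime(c["date"], "%d.%m.%Y %H:%M:%S"))
    return [c["message"] for c in own]

print(history_for_author(commits, 123))  # → ['Initial commit', 'Add license badge to readme']
```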