---
license: apache-2.0
task_categories:
- text-generation
size_categories:
- 10M<n<100M
---

# Dataset Card for Python-Text2Code

- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Paper:** https://aclanthology.org/2024.eacl-long.72.pdf
- **Point of Contact:** [Fenia Christopoulou](mailto:[email protected])

## Dataset Description

The data were crawled from existing, public GitHub repositories before May 2021.
Duplicate files were removed based on the MD5 hash of each file, and only files that met the following criteria were kept:
(a) the file size is under 1MB;
(b) the code is Python 3 compatible, verified via Abstract Syntax Tree (AST) parsing;
(c) there are fewer than 100 characters per line on average;
(d) there are fewer than 1,000 characters in any single line.

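The filtering code itself is not included in this card; as a rough illustration, criteria (a)-(d) can be checked with Python's built-in `ast` module along the following lines (the helper name `keep_file` and the exact threshold handling are assumptions):

```python
import ast
import os


def keep_file(path: str) -> bool:
    """Illustrative check of filtering criteria (a)-(d) for a single .py file."""
    # (a) the file size is under 1MB
    if os.path.getsize(path) >= 1_000_000:
        return False

    with open(path, encoding="utf-8", errors="ignore") as f:
        source = f.read()

    # (b) the code is Python 3 compatible, i.e. it parses into an AST
    try:
        ast.parse(source)
    except (SyntaxError, ValueError):
        return False

    lines = source.splitlines()
    if not lines:
        return False

    # (c) fewer than 100 characters per line on average
    if sum(len(line) for line in lines) / len(lines) >= 100:
        return False

    # (d) fewer than 1,000 characters in any single line
    if max(len(line) for line in lines) >= 1_000:
        return False

    return True
```
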
We then applied AST parsing (via [Tree-sitter](https://tree-sitter.github.io/tree-sitter/)) to the remaining Python files to extract valid functions and their corresponding docstrings.
Docstrings were used as “problem descriptions” and were separated from the code. Functions without a docstring were discarded.
Newlines, indentation and dedentation were then replaced with `<NEW_LINE>`, `<INDENT>` and `<DEDENT>`, respectively, to normalise whitespace, which effectively reduced the length of the sequences.
Finally, we kept only instances with a maximum length of 1024 tokens (docstring + code).

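The exact extraction and normalisation scripts are not reproduced here; the sketch below shows how similar behaviour can be obtained with the Python standard library (`ast` and `tokenize`) instead of Tree-sitter. The helper names are assumptions, and the step that strips the docstring out of the returned code is omitted for brevity:

```python
import ast
import io
import tokenize


def extract_pairs(source: str):
    """Yield (docstring, function_source) pairs for functions that have a docstring."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docstring = ast.get_docstring(node)
            if docstring:
                # In the released pairs the docstring is separated from the code;
                # that removal step is omitted here for brevity.
                yield docstring, ast.get_source_segment(source, node)


def normalise_code(code: str) -> str:
    """Replace newlines, indents and dedents with the special tokens used in the dataset."""
    pieces = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type in (tokenize.NEWLINE, tokenize.NL):
            pieces.append("<NEW_LINE>")
        elif tok.type == tokenize.INDENT:
            pieces.append("<INDENT>")
        elif tok.type == tokenize.DEDENT:
            pieces.append("<DEDENT>")
        elif tok.type == tokenize.ENDMARKER:
            continue
        else:
            pieces.append(tok.string)
    return " ".join(pieces)
```

For example, `normalise_code("def add(a, b):\n    return a + b\n")` yields `def add ( a , b ) : <NEW_LINE> <INDENT> return a + b <NEW_LINE> <DEDENT>`.
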
The final dataset contains 23,526,586 text-to-code pairs in Python.

## Data Fields

Each instance contains 3 fields:

- `id`: Unique ID of each pair
- `code`: The Python code
- `docstring`: The docstring/problem description

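For reference, the fields can be inspected with the `datasets` library; the repository identifier below is a placeholder for this dataset's Hub id:

```python
from datasets import load_dataset

# Placeholder identifier: replace with this dataset's actual Hub id.
ds = load_dataset("<hub-org>/<python-text2code>")
print(ds)  # shows the available split(s) and the three fields

example = ds["train"][0]     # assumes the main split is named "train"
print(example["id"])         # unique ID of the pair
print(example["docstring"])  # problem description
print(example["code"])       # normalised Python code
```
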
## Data Splits

The dataset is released as a single split. We randomly sampled 0.1% of the data to serve as a validation set.

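If the 0.1% validation sample is not shipped as a separate split, a comparable held-out set can be recreated with `datasets` (the seed below is an arbitrary choice, not necessarily the authors' setting):

```python
from datasets import load_dataset

# Placeholder identifier as above; seed chosen arbitrarily for reproducibility.
ds = load_dataset("<hub-org>/<python-text2code>", split="train")
splits = ds.train_test_split(test_size=0.001, seed=42)

train_set = splits["train"]      # 99.9% of the pairs
validation_set = splits["test"]  # 0.1% held out as a validation set
```
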
## Citation

**BibTeX:**

```bibtex
@inproceedings{christopoulou-etal-2024-text,
    title = "Text-to-Code Generation with Modality-relative Pre-training",
    author = "Christopoulou, Fenia and
      Zhang, Guchun and
      Lampouras, Gerasimos",
    editor = "Graham, Yvette and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.72",
    pages = "1194--1208"
}
```

## Dataset Card Authors

Fenia Christopoulou