parquet-converter committed on
Commit 7cd6b2c · 1 Parent(s): b6c4585

Update parquet files
README.md DELETED
@@ -1,186 +0,0 @@
- ---
- pretty_name: CodeXGlueCcCloneDetectionPoj104
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - code
- license:
- - c-uda
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text-retrieval
- task_ids:
- - document-retrieval
- dataset_info:
-   features:
-   - name: id
-     dtype: int32
-   - name: code
-     dtype: string
-   - name: label
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 18878686
-     num_examples: 32000
-   - name: validation
-     num_bytes: 5765303
-     num_examples: 8000
-   - name: test
-     num_bytes: 6852864
-     num_examples: 12000
-   download_size: 8658581
-   dataset_size: 31496853
- ---
- # Dataset Card for "code_x_glue_cc_clone_detection_poj_104"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits-sample-size)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104
-
- ### Dataset Summary
-
- CodeXGLUE Clone-detection-POJ-104 dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104
-
- Given a code snippet and a collection of candidates as input, the task is to return the top-K codes with the same semantics. Models are evaluated by MAP score.
- We use the POJ-104 dataset for this task.
-
- ### Supported Tasks and Leaderboards
-
- - `document-retrieval`: The dataset can be used to train a model for retrieving the top-K codes with the same semantics.
-
- ### Languages
-
- - C++ programming language
-
- ## Dataset Structure
-
- ### Data Instances
-
- An example of 'train' looks as follows.
- ```
- {
-     "code": "\nint f(int shu,int min)\n{ \n int k=1;\n if(shu < min)\n { \n k= 0; \n return k;\n } \n else\n {\n for(int i = min;i<shu;i++)\n { \n if(shu%i == 0)\n { \n k=k+ f(shu/i,i); \n } \n \n \n } \n return k; \n}\n} \n\nmain()\n{\n int n,i,a;\n scanf(\"%d\",&n);\n \n for(i=0;i<n;i++)\n {\n scanf(\"%d\",&a);\n \n if(i!=n-1) \n printf(\"%d\\n\",f(a,2));\n else\n printf(\"%d\",f(a,2)); \n \n \n \n } \n \n \n }",
-     "id": 0,
-     "label": "home"
- }
- ```
-
- ### Data Fields
-
- In the following, each data field is explained for each config. The data fields are the same among all splits.
-
- #### default
-
- |field name| type | description |
- |----------|------|--------------------------------------------------|
- |id |int32 | Index of the sample |
- |code |string| The full text of the function |
- |label |string| The id of the problem that the source code solves|
-
- ### Data Splits
-
- | name |train|validation|test |
- |-------|----:|---------:|----:|
- |default|32000| 8000|12000|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- https://github.com/microsoft, https://github.com/madlag
-
- ### Licensing Information
-
- Computational Use of Data Agreement (C-UDA) License.
-
- ### Citation Information
-
- ```
- @inproceedings{mou2016convolutional,
-   title={Convolutional neural networks over tree structures for programming language processing},
-   author={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},
-   booktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
-   pages={1287--1293},
-   year={2016}
- }
- ```
-
- ### Contributions
-
- Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
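
The deleted card states that models are evaluated by MAP (mean average precision) over the retrieved candidates. As an illustration only, a minimal sketch of that metric; the helper names and the toy labels below are not part of the dataset:

```python
def average_precision(ranked_labels, query_label):
    """AP for one query: ranked_labels are candidate labels in ranked order."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant hit
    return precision_sum / hits if hits else 0.0

def mean_average_precision(queries):
    """queries: list of (query_label, ranked candidate labels)."""
    return sum(average_precision(r, q) for q, r in queries) / len(queries)

# Toy example: two queries over small ranked candidate lists.
queries = [
    ("1", ["1", "2", "1"]),  # hits at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2
    ("2", ["3", "2"]),       # hit at rank 2 -> AP = 1/2
]
print(round(mean_average_precision(queries), 3))  # -> 0.667
```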
 
code_x_glue_cc_clone_detection_poj104.py DELETED
@@ -1,93 +0,0 @@
- from typing import List
-
- import datasets
-
- from .common import TrainValidTestChild
- from .generated_definitions import DEFINITIONS
-
-
- _DESCRIPTION = """Given a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score.
- We use POJ-104 dataset on this task."""
-
- _CITATION = """@inproceedings{mou2016convolutional,
- title={Convolutional neural networks over tree structures for programming language processing},
- author={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},
- booktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
- pages={1287--1293},
- year={2016}
- }"""
-
-
- class CodeXGlueCcCloneDetectionPoj104Impl(TrainValidTestChild):
-     _DESCRIPTION = _DESCRIPTION
-     _CITATION = _CITATION
-
-     _FEATURES = {
-         "id": datasets.Value("int32"),  # Index of the sample
-         "code": datasets.Value("string"),  # The full text of the function
-         "label": datasets.Value("string"),  # The id of the problem that the source code solves
-     }
-
-     _SUPERVISED_KEYS = ["label"]
-
-     SPLIT_RANGES = {"train": (1, 65), "valid": (65, 81), "test": (81, 195)}
-
-     def _generate_examples(self, files, split_name):
-         cont = 0
-         for path, f in files:
-             # paths are in the format ProgramData/{index}/(unknown)
-             label = int(path.split("/")[1])
-             if self.SPLIT_RANGES[split_name][0] <= label < self.SPLIT_RANGES[split_name][1]:
-                 js = {}
-                 js["label"] = str(label)
-                 js["id"] = cont
-                 js["code"] = f.read().decode("latin-1")
-                 yield cont, js
-                 cont += 1
-
-
- CLASS_MAPPING = {
-     "CodeXGlueCcCloneDetectionPoj104": CodeXGlueCcCloneDetectionPoj104Impl,
- }
-
-
- class CodeXGlueCcCloneDetectionPoj104(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIG_CLASS = datasets.BuilderConfig
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
-     ]
-
-     def _info(self):
-         name = self.config.name
-         info = DEFINITIONS[name]
-         if info["class_name"] in CLASS_MAPPING:
-             self.child = CLASS_MAPPING[info["class_name"]](info)
-         else:
-             raise RuntimeError(f"Unknown python class for dataset configuration {name}")
-         ret = self.child._info()
-         return ret
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         name = self.config.name
-         info = DEFINITIONS[name]
-         archive = dl_manager.download(info["raw_url"] + "/programs.tar.gz")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split_name": "train"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split_name": "valid"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split_name": "test"},
-             ),
-         ]
-
-     def _generate_examples(self, files, split_name):
-         return self.child._generate_examples(files, split_name)
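
The deleted loader buckets each POJ problem folder into a split by its numeric index via `SPLIT_RANGES`. A self-contained sketch of that bucketing, assuming half-open `[lo, hi)` ranges, which is the reading consistent with the documented split sizes (64/16/24 problems × 500 programs = 32000/8000/12000 examples over POJ-104's 104 problem folders):

```python
from collections import Counter

# Split ranges as in the loader, read as half-open [lo, hi) over folder indices.
SPLIT_RANGES = {"train": (1, 65), "valid": (65, 81), "test": (81, 195)}

def split_for(label):
    """Return the split a POJ-104 problem-folder index falls into, or None."""
    for name, (lo, hi) in SPLIT_RANGES.items():
        if lo <= label < hi:
            return name
    return None

# Counting the 104 problem folders per split recovers the documented sizes.
counts = Counter(split_for(i) for i in range(1, 105))
print(dict(counts))  # -> {'train': 64, 'valid': 16, 'test': 24}
```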
 
common.py DELETED
@@ -1,75 +0,0 @@
- from typing import List
-
- import datasets
-
-
- # Citation, taken from https://github.com/microsoft/CodeXGLUE
- _DEFAULT_CITATION = """@article{CodeXGLUE,
- title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
- year={2020},}"""
-
-
- class Child:
-     _DESCRIPTION = None
-     _FEATURES = None
-     _CITATION = None
-     SPLITS = {"train": datasets.Split.TRAIN}
-     _SUPERVISED_KEYS = None
-
-     def __init__(self, info):
-         self.info = info
-
-     def homepage(self):
-         return self.info["project_url"]
-
-     def _info(self):
-         # This is the description that will appear on the datasets page.
-         return datasets.DatasetInfo(
-             description=self.info["description"] + "\n\n" + self._DESCRIPTION,
-             features=datasets.Features(self._FEATURES),
-             homepage=self.homepage(),
-             citation=self._CITATION or _DEFAULT_CITATION,
-             supervised_keys=self._SUPERVISED_KEYS,
-         )
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         SPLITS = self.SPLITS
-         _URL = self.info["raw_url"]
-         urls_to_download = {}
-         for split in SPLITS:
-             if split not in urls_to_download:
-                 urls_to_download[split] = {}
-
-             for key, url in self.generate_urls(split):
-                 if not url.startswith("http"):
-                     url = _URL + "/" + url
-                 urls_to_download[split][key] = url
-
-         downloaded_files = {}
-         for k, v in urls_to_download.items():
-             downloaded_files[k] = dl_manager.download_and_extract(v)
-
-         return [
-             datasets.SplitGenerator(
-                 name=SPLITS[k],
-                 gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
-             )
-             for k in SPLITS
-         ]
-
-     def check_empty(self, entries):
-         all_empty = all([v == "" for v in entries.values()])
-         all_non_empty = all([v != "" for v in entries.values()])
-
-         if not all_non_empty and not all_empty:
-             raise RuntimeError("Parallel data files should have the same number of lines.")
-
-         return all_empty
-
-
- class TrainValidTestChild(Child):
-     SPLITS = {
-         "train": datasets.Split.TRAIN,
-         "valid": datasets.Split.VALIDATION,
-         "test": datasets.Split.TEST,
-     }
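
The `check_empty` helper in the deleted `common.py` accepts a set of parallel entries only when they are all empty or all non-empty, flagging length-mismatched parallel files otherwise. A standalone copy of that logic (for illustration) with example calls:

```python
def check_empty(entries):
    """Return True if every parallel entry is empty; raise if only some are."""
    all_empty = all(v == "" for v in entries.values())
    all_non_empty = all(v != "" for v in entries.values())
    if not all_non_empty and not all_empty:
        # Mixed empty/non-empty lines mean the parallel files drifted apart.
        raise RuntimeError("Parallel data files should have the same number of lines.")
    return all_empty

print(check_empty({"src": "", "tgt": ""}))    # -> True (both streams exhausted)
print(check_empty({"src": "a", "tgt": "b"}))  # -> False (both have data)
```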
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "CodeXGLUE Clone-detection-POJ-104 dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104\n\nGiven a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score.\nWe use POJ-104 dataset on this task.", "citation": "@inproceedings{mou2016convolutional,\ntitle={Convolutional neural networks over tree structures for programming language processing},\nauthor={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},\nbooktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},\npages={1287--1293},\nyear={2016}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "label", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_clone_detection_poj104", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 18878686, "num_examples": 32000, "dataset_name": "code_x_glue_cc_clone_detection_poj104"}, "validation": {"name": "validation", "num_bytes": 5765303, "num_examples": 8000, "dataset_name": "code_x_glue_cc_clone_detection_poj104"}, "test": {"name": "test", "num_bytes": 6852864, "num_examples": 12000, "dataset_name": "code_x_glue_cc_clone_detection_poj104"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/Clone-detection-POJ-104/dataset/programs.tar.gz": {"num_bytes": 8658581, "checksum": "c0b8ef3ee9c9159c882dc9337cb46da0e612a28e24852a83f8a1cd68c838f390"}}, "download_size": 8658581, "post_processing_size": null, "dataset_size": 31496853, "size_in_bytes": 40155434}}
 
 
default/code_x_glue_cc_clone_detection_poj104-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93065b3019c0d4e82fc1765f819e9dfdb70e0edd9ba394ade9650f0f67242509
+ size 2853417
default/code_x_glue_cc_clone_detection_poj104-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fc953731a77d9a0c3ed58e9fabc41843b24f22a391d284f0fd717b86d3a0818
+ size 8031542
default/code_x_glue_cc_clone_detection_poj104-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63b0e0d702b7b93c1afe59f53c7a5545b36e334cd07046e739a4b5bbf60def20
+ size 2463775
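
The three parquet entries added above are Git LFS pointer files rather than the parquet binaries themselves: the repository stores a small text stub (`version`, `oid`, `size`) and the data lives in LFS storage. A minimal sketch of parsing such a pointer; the `parse_lfs_pointer` helper is illustrative, not part of any tooling here:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "<key> <value>"
        fields[key] = value
    return fields

# The test-split pointer exactly as it appears in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:93065b3019c0d4e82fc1765f819e9dfdb70e0edd9ba394ade9650f0f67242509
size 2853417
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # -> 2853417
```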
generated_definitions.py DELETED
@@ -1,12 +0,0 @@
- DEFINITIONS = {
-     "default": {
-         "class_name": "CodeXGlueCcCloneDetectionPoj104",
-         "dataset_type": "Code-Code",
-         "description": "CodeXGLUE Clone-detection-POJ-104 dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104",
-         "dir_name": "Clone-detection-POJ-104",
-         "name": "default",
-         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104",
-         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/Clone-detection-POJ-104/dataset",
-         "sizes": {"test": 12000, "train": 32000, "validation": 8000},
-     }
- }