skgouda committed · Commit 9d57b4c · verified · 1 parent: 27daf62

Create README.md

Files changed (1): README.md (+236 −3)
---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: test
    dtype: string
  - name: entry_point
    dtype: string
  splits:
  - name: multilingual-humaneval_python
    num_bytes: 165716
    num_examples: 164
  download_size: 67983
  dataset_size: 165716
license: apache-2.0
task_categories:
- text-generation
tags:
- mxeval
- code-generation
- mbxp
- multi-humaneval
- mathqax
pretty_name: mxeval
language:
- en
---
# MxEval
**M**ultilingual E**x**ecution **Eval**uation

## Table of Contents
- [MxEval](#mxeval)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Dataset Curators](#dataset-curators)
  - [Execution](#execution)
    - [Execution Example](#execution-example)
    - [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)

### Dataset Summary

This repository contains the data and code to perform execution-based multi-lingual evaluation of code generation capabilities, namely the multi-lingual benchmark MBXP, multi-lingual MathQA, and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).

### Supported Tasks and Leaderboards
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)

### Languages
The programming problems are written in multiple programming languages and contain English natural-language text in comments and docstrings.

## Dataset Structure
To look up the currently supported datasets:
```python
from datasets import get_dataset_config_names

get_dataset_config_names("mxeval/mxeval")
# ['mathqa-x', 'mbxp', 'multi-humaneval']
```
To load a specific dataset and language:
```python
from datasets import load_dataset

load_dataset("mxeval/mxeval", "mbxp", split="python")
# Dataset({
#     features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'description', 'canonical_solution'],
#     num_rows: 974
# })
```

### Data Instances

An example of a dataset instance:

```python
{
    "task_id": "MBSCP/6",
    "language": "scala",
    "prompt": "object Main extends App {\n /**\n * You are an expert Scala programmer, and here is your task.\n * * Write a Scala function to check whether the two numbers differ at one bit position only or not.\n *\n * >>> differAtOneBitPos(13, 9)\n * true\n * >>> differAtOneBitPos(15, 8)\n * false\n * >>> differAtOneBitPos(2, 4)\n * false\n */\n def differAtOneBitPos(a : Int, b : Int) : Boolean = {\n",
    "test": "\n\n var arg00 : Int = 13\n var arg01 : Int = 9\n var x0 : Boolean = differAtOneBitPos(arg00, arg01)\n var v0 : Boolean = true\n assert(x0 == v0, \"Exception -- test case 0 did not pass. x0 = \" + x0)\n\n var arg10 : Int = 15\n var arg11 : Int = 8\n var x1 : Boolean = differAtOneBitPos(arg10, arg11)\n var v1 : Boolean = false\n assert(x1 == v1, \"Exception -- test case 1 did not pass. x1 = \" + x1)\n\n var arg20 : Int = 2\n var arg21 : Int = 4\n var x2 : Boolean = differAtOneBitPos(arg20, arg21)\n var v2 : Boolean = false\n assert(x2 == v2, \"Exception -- test case 2 did not pass. x2 = \" + x2)\n\n\n}\n",
    "entry_point": "differAtOneBitPos",
    "description": "Write a Scala function to check whether the two numbers differ at one bit position only or not."
}
```

### Data Fields

- `task_id`: identifier for the data sample
- `prompt`: input for the model, containing the function header and docstring
- `canonical_solution`: solution for the problem posed in the `prompt`
- `description`: task description
- `test`: function that tests the generated code for correctness
- `entry_point`: entry point for the test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
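
For example, a complete executable program can be assembled from these fields by concatenating the `prompt`, a candidate function body, and the `test` harness. A minimal sketch; the `completion` variable stands in for model output, with the dataset's own `canonical_solution` used as a placeholder:

```python
from datasets import load_dataset

# Load the Python split of MBXP and pick one problem.
problem = load_dataset("mxeval/mxeval", "mbxp", split="python")[0]

# A model-generated function body would normally go here; the
# reference solution shipped with the dataset is used as a stand-in.
completion = problem["canonical_solution"]

# prompt (function header + docstring) + body + test harness
program = problem["prompt"] + completion + "\n" + problem["test"]
print(program)
```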

### Data Splits

- Multi-HumanEval
  - Python
  - Java
  - JavaScript
  - Csharp
  - CPP
  - Go
  - Kotlin
  - PHP
  - Perl
  - Ruby
  - Swift
  - Scala
- MBXP
  - Python
  - Java
  - JavaScript
  - TypeScript
  - Csharp
  - CPP
  - Go
  - Kotlin
  - PHP
  - Perl
  - Ruby
  - Swift
  - Scala
- MathQA-X
  - Python
  - Java
  - JavaScript

## Dataset Creation

### Curation Rationale

Since code generation models are often trained on dumps of GitHub, a dataset not included in those dumps was necessary to evaluate such models properly. However, since this dataset has now been published on GitHub, it is likely to be included in future dumps.

### Personal and Sensitive Information

None.

### Social Impact of Dataset
With this dataset, code generation models can be evaluated more reliably, which leads to fewer issues being introduced when such models are used.

### Dataset Curators
AWS AI Labs

## Execution

### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.

```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> mbxp_python = load_dataset("mxeval/mxeval", "mbxp", split="python")
>>> example_problem = mbxp_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'MBPP/1', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 10.582208633422852}
```
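
To score a whole set of generations, `check_correctness` can be looped over the problems and the results aggregated into a pass rate. A minimal sketch, assuming one candidate per problem; the `generations` mapping is a hypothetical stand-in for model output (the canonical solutions are used here):

```python
from datasets import load_dataset
from mxeval.execution import check_correctness

mbxp_python = load_dataset("mxeval/mxeval", "mbxp", split="python")

# Hypothetical: map each task_id to one model-generated solution body.
generations = {p["task_id"]: p["canonical_solution"] for p in mbxp_python}

# Execute every candidate against its test harness.
results = [
    check_correctness(p, generations[p["task_id"]], timeout=20.0)
    for p in mbxp_python
]

pass_rate = sum(r["passed"] for r in results) / len(results)
print(f"pass@1: {pass_rate:.3f}")
```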

### Considerations for Using the Data
Make sure to sandbox the execution environment, since generated code samples can be harmful.
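
As a first line of defense (not a substitute for a real sandbox such as a container or VM), each program can be run in a separate subprocess with CPU-time and memory caps. A minimal POSIX-only sketch using Python's `resource` module:

```python
import resource
import subprocess

def run_limited(cmd, timeout=20.0, mem_bytes=1 << 30):
    """Run `cmd` in a child process capped at `timeout` CPU seconds
    and `mem_bytes` of address space. POSIX-only (uses preexec_fn);
    not a full sandbox -- pair with containers/VMs for untrusted code."""
    def set_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (int(timeout), int(timeout)))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        cmd,
        preexec_fn=set_limits,  # apply limits in the child before exec
        timeout=timeout,        # wall-clock cap enforced by the parent
        capture_output=True,
        text=True,
    )

# Example: run a generated Python file under the caps.
# result = run_limited(["python3", "generated_solution.py"])
```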

### Licensing Information

[LICENSE](https://huggingface.co/datasets/mxeval/mxeval/blob/main/LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mxeval/blob/main/THIRD_PARTY_LICENSES)

## Citation Information
```
@article{mbxp_athiwaratkun2022,
  title     = {Multi-lingual Evaluation of Code Generation Models},
  author    = {Athiwaratkun, Ben and
               Gouda, Sanjay Krishna and
               Wang, Zijian and
               Li, Xiaopeng and
               Tian, Yuchen and
               Tan, Ming and
               Ahmad, Wasi Uddin and
               Wang, Shiqi and
               Sun, Qing and
               Shang, Mingyue and
               Gonugondla, Sujan Kumar and
               Ding, Hantian and
               Kumar, Varun and
               Fulton, Nathan and
               Farahani, Arash and
               Jain, Siddhartha and
               Giaquinto, Robert and
               Qian, Haifeng and
               Ramanathan, Murali Krishna and
               Nallapati, Ramesh and
               Ray, Baishakhi and
               Bhatia, Parminder and
               Sengupta, Sudipta and
               Roth, Dan and
               Xiang, Bing},
  doi       = {10.48550/ARXIV.2210.14868},
  url       = {https://arxiv.org/abs/2210.14868},
  keywords  = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

## Contributions

[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)