akatsuki1125 committed 1700bcc (verified · parent 50515c4): Update README.md

Files changed: README.md (+24 −0)
---

# Dataset Card for "JMultiPL-E"

## Dataset Description

- **Repository:** https://github.com/tohoku-nlp/JMultiPL-E
<!-- - **Paper:** https://ieeexplore.ieee.org/abstract/document/10103177 -->
<!-- - **Point of Contact:** [email protected], [email protected], [email protected] -->

## Dataset Summary

JMultiPL-E is a dataset for evaluating large language models for code
generation that supports 17 programming languages. It takes the OpenAI
HumanEval benchmark and uses little compilers to translate its problems into
the other languages. It is easy to add support for new languages and
benchmarks.

The dataset is divided into several configurations named *SRCDATA-LANG*, where
*SRCDATA* is "humaneval" and *LANG* is one of the supported languages. We use
the canonical file extension for each language to identify the language, e.g.,
"cpp" for C++, "lua" for Lua, "clj" for Clojure, and so on.
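The *SRCDATA-LANG* naming scheme can be sketched as a small helper. The extension table below only covers the examples given in the card, and the dataset path in the comment is an assumption:

```python
# Sketch of the SRCDATA-LANG configuration naming described above.
# The extension table only lists the examples from the card; the real
# dataset supports more languages.
CANONICAL_EXT = {
    "C++": "cpp",
    "Lua": "lua",
    "Clojure": "clj",
}

def config_name(srcdata: str, language: str) -> str:
    """Build a configuration name such as 'humaneval-cpp'."""
    return f"{srcdata}-{CANONICAL_EXT[language]}"

# The resulting string is the configuration name you would pass to
# datasets.load_dataset (dataset path below is an assumption), e.g.:
#   load_dataset("tohoku-nlp/JMultiPL-E", config_name("humaneval", "C++"))
print(config_name("humaneval", "C++"))  # humaneval-cpp
```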

## Using JMultiPL-E

- JMultiPL-E is part of the [BigCode Code Generation LM Harness](https://github.com/bigcode-project/bigcode-evaluation-harness).
  This is the easiest way to use JMultiPL-E.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)