---
license: cc-by-4.0
tags:
- programming
- code-generation
- competitive-programming
- benchmark
---
# ICPC World Finals Dataset

## Dataset Description
The ICPC World Finals Dataset is a challenging benchmark for code generation, comprising 146 problems from the International Collegiate Programming Contest (ICPC) World Finals held between 2011 and 2023. The ICPC World Finals is one of the most prestigious and difficult competitive programming contests in the world, which makes this dataset particularly valuable for assessing the advanced problem-solving and code-generation capabilities of language models.

## Dataset Statistics
- **Total Problems**: 146
- **Time Span**: 2011-2023
- **Problem Complexity**: High (ICPC World Finals level)
- **Languages**: Problem statements in English; solutions expected in Python
 
## Dataset Structure

```python
from datasets import load_dataset

ds = load_dataset("HumanLastCodeExam/icpc-world-finals")

# Basic exploration
print(f"Dataset size: {len(ds['train'])} problems")
print(f"Sample problem title: {ds['train'][0]['question_title']}")
```
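Because each `question_id` encodes the contest year before the underscore (e.g. `2015_I`), simple corpus statistics fall out of a string split. The snippet below is an illustrative sketch using made-up `question_id` values; the real values come from the loaded dataset.

```python
from collections import Counter

# Toy question_id values in the "YEAR_INDEX" format seen in the records
# (e.g. "2015_I"); in practice, read them from the loaded dataset.
question_ids = ["2015_I", "2015_A", "2023_B", "2011_C"]

# The contest year is the part before the underscore, so the
# year distribution is a one-line Counter.
years = Counter(qid.split("_")[0] for qid in question_ids)
print(years)  # Counter({'2015': 2, '2023': 1, '2011': 1})
```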

## Sample Record

```
"question_title": "Ship Traffic",
"platform": "ICPC_world_final_2015",
"question_id": "2015_I",
"question_content": "## Problem Description\n\nFerries crossing the Strait of Gibraltar from Morocco to xxx",
"test_cases": [{"input": "xxx", "output": "xxxx"}],
"prompt": "You are an expert Python programmer.\n\n- You will be given a problem statement,xxx" + "## Problem Description\n\nFerries crossing the Strait of Gibraltar from Morocco to xxx",
"instruct": "You are an expert Python programmer.\n\n- You will be given a problem statement,xxx"
```
## Data Fields

- `question_title`: The title of the programming problem.
- `platform`: The contest the problem comes from, e.g. `ICPC_world_final_2015`.
- `question_id`: A unique identifier for the problem, e.g. `2015_I`, used for reference and retrieval.
- `question_content`: The full problem statement, specifying the task to be accomplished.
- `test_cases`: A list of test cases, each a paired `input` and `output`, used to validate candidate solutions.
- `prompt`: The concatenation of `instruct` and `question_content`. Use this field as the model input when generating code.
- `instruct`: The default instruction template used to build `prompt`; you may substitute your own instruction.

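The fields above are enough to sketch a minimal evaluation loop. The helper below is an illustrative sketch, not the official HLCE harness: it runs a candidate Python solution as a subprocess against each entry in `test_cases`, feeding `input` on stdin and comparing stdout to `output`. The `example` record and `candidate` solution are toy values, not real dataset contents.

```python
import sys
import subprocess

# Toy record mirroring the dataset's `test_cases` field; real records
# come from load_dataset("HumanLastCodeExam/icpc-world-finals").
example = {
    "test_cases": [{"input": "1 2\n", "output": "3\n"}],
}

# A candidate Python solution, e.g. generated by a model from `prompt`.
candidate = "a, b = map(int, input().split())\nprint(a + b)\n"

def passes_all_tests(solution: str, test_cases: list) -> bool:
    """Run `solution` once per test case, feeding `input` on stdin and
    comparing stdout to the expected `output` (whitespace-trimmed)."""
    for case in test_cases:
        result = subprocess.run(
            [sys.executable, "-c", solution],
            input=case["input"],
            capture_output=True,
            text=True,
            timeout=10,
        )
        if result.returncode != 0 or result.stdout.strip() != case["output"].strip():
            return False
    return True

print(passes_all_tests(candidate, example["test_cases"]))  # prints True for this toy pair
```

A real harness would also enforce the contest's per-problem time and memory limits; this sketch only applies a flat subprocess timeout.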
## Paper

```bibtex
@misc{li2025humanityscodeexamadvanced,
      title={Humanity's Last Code Exam: Can Advanced LLMs Conquer Human's Hardest Code Competition?},
      author={Xiangyang Li and Xiaopeng Li and Kuicai Dong and Quanhu Zhang and Rongju Ruan and Xinyi Dai and Xiaoshuang Liu and Shengchun Xu and Yasheng Wang and Ruiming Tang},
      year={2025},
      eprint={2506.12713},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2506.12713},
}
```
## GitHub Repository

For more information, examples, and evaluation scripts, see:

https://github.com/Humanity-s-Last-Code-Exam/HLCE