---
license: cc-by-nc-4.0
tags:
  - coding
  - agents
---

# SWE-PolyBench

SWE-PolyBench is a multi-language, repository-level software engineering benchmark. It currently covers four languages: Python, Java, JavaScript, and TypeScript. The number of instances per language is:

- JavaScript: 1017
- TypeScript: 729
- Python: 199
- Java: 165
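
These counts can be reproduced directly from the data. Here is a minimal sketch using the Hugging Face `datasets` library and the full dataset described in the next section (the `test` split name is an assumption; check the dataset page if it differs):

```python
from collections import Counter

from datasets import load_dataset

# Tally instances per language; this should match the figures above.
ds = load_dataset("AmazonScience/SWE-PolyBench", split="test")
print(Counter(ds["language"]))
```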

## Datasets

Two datasets are available under SWE-PolyBench: `AmazonScience/SWE-PolyBench` is the full dataset, and `AmazonScience/SWE-PolyBench_500` is a stratified sample of 500 instances.
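
Both variants load the same way; only the dataset id changes. A minimal sketch (again assuming a `test` split):

```python
from datasets import load_dataset

full = load_dataset("AmazonScience/SWE-PolyBench", split="test")
sampled = load_dataset("AmazonScience/SWE-PolyBench_500", split="test")
print(len(full), len(sampled))  # expected: 2110 and 500
```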

## Leaderboard

We evaluated several open-source coding agents and models on this dataset and report the results in our leaderboard.

## Submit

To submit your predictions on this dataset, please follow this README.

## Languages

The text of the dataset is primarily English.

## Dataset Structure

Each row of the dataset includes the following columns:

- `instance_id`: (str) A formatted instance identifier, usually `repo_owner__repo_name-PR-number`.
- `patch`: (str) The gold patch generated by the PR (minus test-related code) that resolved the issue.
- `repo`: (str) The `owner/name` repository identifier from GitHub.
- `base_commit`: (str) The commit hash representing the HEAD of the repository before the solution PR is applied.
- `hints_text`: (str) Comments made on the issue prior to the creation date of the solution PR's first commit.
- `created_at`: (str) The creation date of the pull request.
- `test_patch`: (str) A test-file patch contributed by the solution PR.
- `problem_statement`: (str) The issue title and body.
- `F2P`: (str) A JSON-encoded list of strings representing the fail-to-pass tests: tests resolved by the PR and tied to the issue resolution.
- `P2P`: (str) A JSON-encoded list of strings representing the pass-to-pass tests: tests that should pass both before and after the PR is applied.
- `language`: (str) The programming language of the instance.
- `Dockerfile`: (str) The instance-level Dockerfile.
- `test_command`: (str) The test command used to obtain the F2P and P2P sets.
- `task_category`: (str) The problem classification (Bug Fix, Refactoring, Feature).
- `is_no_nodes`: (bool) Helpful info for evaluating retrieval metrics.
- `is_func_only`: (bool) Helpful info for evaluating retrieval metrics.
- `is_class_only`: (bool) Helpful info for evaluating retrieval metrics.
- `is_mixed`: (bool) Helpful info for evaluating retrieval metrics.
- `num_func_changes`: (int) Helpful info for evaluating retrieval metrics.
- `num_class_changes`: (int) Helpful info for evaluating retrieval metrics.
- `num_nodes`: (int) Helpful info for evaluating retrieval metrics.
- `is_single_func`: (bool) Helpful info for evaluating retrieval metrics.
- `is_single_class`: (bool) Helpful info for evaluating retrieval metrics.
- `modified_nodes`: (bool) Helpful info for evaluating retrieval metrics.
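
Note that `F2P` and `P2P` arrive as JSON-encoded strings rather than native lists, so they need `json.loads` before use, and the `Dockerfile`/`test_command` pair describes how an instance's tests are run. Below is a hedged sketch of both; the split name, the image tag scheme, and the assumption that each Dockerfile builds from an empty context are illustrative, and the submission README above is the authoritative evaluation path:

```python
import json
import subprocess
import tempfile
from pathlib import Path

from datasets import load_dataset

ds = load_dataset("AmazonScience/SWE-PolyBench_500", split="test")  # split name assumed
row = ds[0]
print(row["instance_id"], row["language"], row["task_category"])

# F2P/P2P are JSON strings; decode them into lists of test identifiers.
fail_to_pass = json.loads(row["F2P"])
pass_to_pass = json.loads(row["P2P"])
print(f"{len(fail_to_pass)} fail-to-pass, {len(pass_to_pass)} pass-to-pass tests")

# Illustrative only: build the per-instance image and run its test command.
# Assumes Docker is available and the Dockerfile is self-contained
# (e.g. it checks out the repo at base_commit itself).
with tempfile.TemporaryDirectory() as ctx:
    Path(ctx, "Dockerfile").write_text(row["Dockerfile"])
    tag = f"polybench-{row['instance_id'].lower()}"  # made-up tag scheme
    subprocess.run(["docker", "build", "-t", tag, ctx], check=True)
    subprocess.run(
        ["docker", "run", "--rm", tag, "sh", "-c", row["test_command"]],
        check=True,
    )
```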

## Citation

If you find our work useful, please cite:

```bibtex
@misc{rashid2025swepolybenchmultilanguagebenchmarkrepository,
      title={SWE-PolyBench: A multi-language benchmark for repository level evaluation of coding agents},
      author={Muhammad Shihab Rashid and Christian Bock and Yuan Zhuang and Alexander Buchholz and Tim Esler and Simon Valentin and Luca Franceschi and Martin Wistuba and Prabhu Teja Sivaprasad and Woo Jung Kim and Anoop Deoras and Giovanni Zappella and Laurent Callot},
      year={2025},
      eprint={2504.08703},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2504.08703},
}
```