---
license: unknown
dataset_info:
- config_name: file_content
  features:
  - name: hash
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: test
    num_bytes: 1309611058
    num_examples: 56774
  download_size: 445913258
  dataset_size: 1309611058
- config_name: problem_files
  features:
  - name: instance_id
    dtype: string
  - name: files
    list:
    - name: content_hash
      dtype: string
    - name: file_path
      dtype: string
  splits:
  - name: test
    num_bytes: 92318557
    num_examples: 500
  download_size: 23353903
  dataset_size: 92318557
configs:
- config_name: file_content
  data_files:
  - split: test
    path: file_content/test-*
- config_name: problem_files
  data_files:
  - split: test
    path: problem_files/test-*
---

# SWE-Bench Verified Codebase Content Dataset

## Introduction

[SWE-bench](https://www.swebench.com/) is a popular benchmark that measures how well systems can solve real-world software engineering problems. To solve SWE-bench problems, systems need to interact with large codebases that have long commit histories. Interacting with these codebases in an agent loop using git can be slow and can take up large amounts of storage space.

This dataset provides the complete Python codebase snapshots for all problems in the [SWE-bench Verified](https://openai.com/index/introducing-swe-bench-verified/) dataset. For each problem instance, it includes all Python files present in the repository at the commit hash specified by the original SWE-bench dataset.

## How to Use

The dataset consists of two main components:
1. `file_content`: Contains the actual content of all unique files
2. `problem_files`: Maps each problem instance to its relevant files

Here's an example of how to load and use the dataset:

```python
from datasets import load_dataset

REPO_CONTENT_DATASET_NAME = "ScalingIntelligence/swe-bench-verified-codebase-content"

# Load both components of the dataset
file_content = load_dataset(
    REPO_CONTENT_DATASET_NAME, "file_content", split="test"
)
hash_to_content = {row['hash']: row['content'] for row in file_content}
problem_files = load_dataset(
    REPO_CONTENT_DATASET_NAME, "problem_files", split="test"
)

# Example: Get files for a specific problem instance
problem = problem_files[0]

print(problem['instance_id'])  # 'astropy__astropy-12907'

# Get the content of each file for the first 10 files
for file_info in problem["files"][:10]:
    file_path = file_info["file_path"]
    content = hash_to_content[file_info["content_hash"]]

    print(f"File: {file_path}")
    print("Content:", content[:100], "...")  # Print first 100 chars
```
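
If you need the codebase on disk (for example, to run an agent or static analysis over it), you can materialize a problem's snapshot into a local directory. The helper below is a minimal sketch, not part of the dataset: `materialize_snapshot` and the `snapshots/` output directory are illustrative names, and it reuses `problem` and `hash_to_content` from the example above.

```python
from pathlib import Path

def materialize_snapshot(problem, hash_to_content, output_dir: Path) -> None:
    """Write every file of one problem instance under output_dir, preserving relative paths."""
    for file_info in problem["files"]:
        dest = output_dir / file_info["file_path"]
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(hash_to_content[file_info["content_hash"]])

# Illustrative usage: write the snapshot for the first problem instance
materialize_snapshot(problem, hash_to_content, Path("snapshots") / problem["instance_id"])
```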

## Dataset construction

The dataset is generated using a Python script that:
1. Clones all repositories from the SWE-bench Verified dataset
2. Checks out the specific commit for each problem
3. Collects all Python files from the repository
4. Deduplicates file content using SHA-256 hashing
5. Creates two dataset components: one for file content and one for problem-to-file mappings

Here's the full code used to generate the dataset:

```python
import argparse
from dataclasses import dataclass, asdict
import hashlib
from pathlib import Path
import subprocess
from typing import Dict, List

import datasets
from datasets import Dataset

import tqdm


class _SWEBenchProblem:
    """A problem in the SWE-bench Verified dataset."""

    def __init__(self, row):
        self._row = row

    @property
    def repo(self) -> str:
        return self._row["repo"]

    @property
    def base_commit(self) -> str:
        return self._row["base_commit"]

    @property
    def instance_id(self) -> str:
        return self._row["instance_id"]


VALID_EXTENSIONS = {"py"}


@dataclass
class FileInCodebase:
    file_path: str
    content_hash: str


@dataclass
class CodebaseContent:
    """The content of the codebase for a specific SWE-Bench problem."""

    instance_id: str
    files: List[FileInCodebase]


def hash_file_content(file_content: str) -> str:
    return hashlib.sha256(file_content.encode()).hexdigest()


def clone_repos(problems: list[_SWEBenchProblem], repos_dir: Path):
    """Clones all the repos needed for SWE-bench Verified."""
    repos_dir.mkdir(exist_ok=False, parents=True)

    if len(list(repos_dir.iterdir())):
        raise ValueError("Repos dir should be empty")

    repos = {problem.repo for problem in problems}
    for repo in tqdm.tqdm(repos, desc="Cloning repos"):
        output = subprocess.run(
            ["git", "clone", f"https://github.com/{repo}.git"],
            cwd=repos_dir,
            capture_output=True,
        )
        assert output.returncode == 0


def get_codebase_content(
    problem: _SWEBenchProblem, repos_dir: Path, hash_to_content: Dict[str, str]
) -> CodebaseContent:
    """Gets the content of the codebase for a specific problem.

    Updates the hash_to_content map in place with hashes of the content of each file.
    """
    repo = problem.repo.split("/")[-1]
    repo_path = repos_dir / repo

    subprocess.run(
        ["git", "checkout", problem.base_commit], cwd=repo_path, capture_output=True
    )

    contexts = []

    for file_path in repo_path.rglob("*"):
        if not file_path.is_file():
            continue

        if file_path.suffix[1:] not in VALID_EXTENSIONS:  # [1:] excludes the '.'
            continue

        try:
            content = file_path.read_text()
        except UnicodeDecodeError:
            # Ignore these files.
            continue

        content_hash = hash_file_content(content)
        if content_hash not in hash_to_content:
            hash_to_content[content_hash] = content

        contexts.append(
            FileInCodebase(
                file_path=str(file_path.relative_to(repo_path)),
                content_hash=content_hash,
            )
        )

    return CodebaseContent(instance_id=problem.instance_id, files=contexts)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--repo_directory",
        type=Path,
        default=Path("/scr/ryanehrlich/swebench_verified_repos"),
    )
    parser.add_argument(
        "--output_dataset_name",
        type=str,
        default="ScalingIntelligence/swe-bench-verified-codebase-content",
    )

    args = parser.parse_args()

    dataset = datasets.load_dataset("princeton-nlp/SWE-bench_Verified", split="test")
    problems = [_SWEBenchProblem(row) for row in dataset]

    clone_repos(problems, args.repo_directory)
    hash_to_content = {}
    codebase_content_per_problem = [
        get_codebase_content(problem, args.repo_directory, hash_to_content)
        for problem in tqdm.tqdm(problems, desc="Fetching codebase content")
    ]

    hash_to_content_in_hf_form = [
        {
            "hash": hash_,
            "content": content,
        }
        for (hash_, content) in hash_to_content.items()
    ]

    codebase_content_in_hf_form = [
        asdict(problem) for problem in codebase_content_per_problem
    ]

    file_content_dataset = Dataset.from_list(hash_to_content_in_hf_form, split="test")
    problems_dataset = Dataset.from_list(codebase_content_in_hf_form, split="test")

    file_content_dataset.push_to_hub(
        args.output_dataset_name, "file_content", private=True, max_shard_size="256MB"
    )
    problems_dataset.push_to_hub(
        args.output_dataset_name, "problem_files", private=True, max_shard_size="256MB"
    )


if __name__ == "__main__":
    main()
```
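
To regenerate the dataset, save the script above (for example as `generate_dataset.py`; the filename is illustrative) and run it with the two flags it defines, e.g. `python generate_dataset.py --repo_directory /path/to/clone/dir --output_dataset_name your-org/your-dataset`. Note that `--repo_directory` should point to a directory that does not yet exist, since the script creates it and clones every SWE-bench Verified repository into it, and both configs are pushed to the Hub as private datasets.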