Update README.md

README.md (CHANGED)

@@ -300,6 +300,39 @@ task_categories:

JSONSchemaBench is a benchmark of **real-world JSON schemas** designed to evaluate **structured output generation** for Large Language Models (LLMs). It contains approximately **10,000 JSON schemas**, capturing diverse constraints and complexities.

## 📢 Important Update (March 10th, 2025)

We have restructured the dataset to include train/val/test splits. If you downloaded the dataset before this date, you might encounter errors like `KeyError: 'Github_easy'`.

To fix this issue, please follow one of the options below:

1. Update How Subsets Are Accessed:
If you previously used:

```python
from datasets import load_dataset, concatenate_datasets, DatasetDict, Dataset

subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench")
subset["Github_easy"]
```
You can update it to:

```python
from datasets import load_dataset, concatenate_datasets, DatasetDict, Dataset

subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench", name="Github_easy")
subset: Dataset = concatenate_datasets([subset["train"], subset["val"], subset["test"]])
```
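
If you work with several subsets, the same pattern can be repeated per subset and reassembled into a single subset-keyed mapping similar to the old layout. A minimal sketch, assuming you fill in the subset names you actually need (only `Github_easy` appears in the examples above):

```python
from datasets import DatasetDict, concatenate_datasets, load_dataset

# Fill in the subset (config) names you need; only "Github_easy" is shown here.
subset_names = ["Github_easy"]

merged = DatasetDict()
for name in subset_names:
    splits = load_dataset("epfl-dlab/JSONSchemaBench", name=name)
    # Collapse the new train/val/test splits back into one Dataset per subset.
    merged[name] = concatenate_datasets([splits["train"], splits["val"], splits["test"]])

merged["Github_easy"]  # same subset-keyed access as before the restructuring
```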

2. Load the Dataset in the Old Structure:
If you need the previous structure, you can use a specific revision:

```python
dataset = load_dataset("epfl-dlab/JSONSchemaBench", revision="e2ee5fdba65657c60d3a24b321172eb7141f8d73")
```
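
With that revision pinned, the dataset keeps its pre-split layout, so the subset-keyed access from the old structure should work unchanged. A small self-contained sketch:

```python
from datasets import load_dataset

# Pin the pre-split revision referenced above, then access subsets by name as before.
dataset = load_dataset(
    "epfl-dlab/JSONSchemaBench",
    revision="e2ee5fdba65657c60d3a24b321172eb7141f8d73",
)
github_easy = dataset["Github_easy"]
print(len(github_easy))
```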

We apologize for the inconvenience and appreciate your understanding! 😊

## 📌 Dataset Overview

- **Purpose:** Evaluate the **efficiency** and **coverage** of structured output generation (see the validation sketch below).
- **Sources:** GitHub, Kubernetes, API specifications, curated collections.
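
As an illustration of the coverage side of that evaluation, the sketch below loads one subset and checks a candidate JSON object against one of its schemas with the third-party `jsonschema` package. This is not part of the benchmark's own harness, and the column name `json_schema` is an assumption; check `column_names` on the loaded split if it differs.

```python
import json

import jsonschema  # pip install jsonschema
from datasets import concatenate_datasets, load_dataset

# Load one subset in the new structure and merge its splits.
subset = load_dataset("epfl-dlab/JSONSchemaBench", name="Github_easy")
schemas = concatenate_datasets([subset["train"], subset["val"], subset["test"]])

# Assumed column name; inspect schemas.column_names if this raises a KeyError.
schema = json.loads(schemas[0]["json_schema"])

# A stand-in for LLM output: any JSON object you want to check against the schema.
candidate = json.loads('{"name": "example"}')

try:
    jsonschema.validate(instance=candidate, schema=schema)
    print("candidate satisfies the schema")
except jsonschema.ValidationError as err:
    print("schema violation:", err.message)
```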