---
dataset_info:
  features:
    - name: json_schema
      dtype: string
    - name: unique_id
      dtype: string
  splits:
    - name: WashingtonPost
      num_bytes: 2710006
      num_examples: 125
    - name: Snowplow
      num_bytes: 1615377
      num_examples: 408
    - name: Kubernetes
      num_bytes: 25511596
      num_examples: 1087
    - name: Handwritten
      num_bytes: 301079
      num_examples: 147
    - name: Synthesized
      num_bytes: 180571
      num_examples: 413
    - name: Github_trivial
      num_bytes: 778720
      num_examples: 457
    - name: Github_easy
      num_bytes: 1961230
      num_examples: 1943
    - name: Github_medium
      num_bytes: 7973692
      num_examples: 1976
    - name: Github_hard
      num_bytes: 20187683
      num_examples: 1240
    - name: Github_ultra
      num_bytes: 12243180
      num_examples: 166
    - name: JsonSchemaStore
      num_bytes: 22166586
      num_examples: 492
    - name: Glaiveai2K
      num_bytes: 1443150
      num_examples: 1709
  download_size: 19161655
  dataset_size: 97072870
configs:
  - config_name: default
    data_files:
      - split: WashingtonPost
        path: data/WashingtonPost-*
      - split: Snowplow
        path: data/Snowplow-*
      - split: Kubernetes
        path: data/Kubernetes-*
      - split: Handwritten
        path: data/Handwritten-*
      - split: Synthesized
        path: data/Synthesized-*
      - split: Github_trivial
        path: data/Github_trivial-*
      - split: Github_easy
        path: data/Github_easy-*
      - split: Github_medium
        path: data/Github_medium-*
      - split: Github_hard
        path: data/Github_hard-*
      - split: Github_ultra
        path: data/Github_ultra-*
      - split: JsonSchemaStore
        path: data/JsonSchemaStore-*
      - split: Glaiveai2K
        path: data/Glaiveai2K-*
---

JSONSchemaBench: JSON Schema Witness Generation Dataset

Overview

This dataset is a collection of JSON Schema witnesses drawn from various sources, intended to support the study and evaluation of JSON Schema transformation, validation, and related operations. It consists of several subsets, each derived from a different domain or source, and is structured to allow comprehensive testing and benchmarking of tools and algorithms that process JSON Schemas.
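Every split shares the same two string features (`json_schema`, `unique_id`). As a rough sketch (the file name and Hub ID in the comments are placeholders, not taken from this repository), a split can be read with pandas or the 🤗 Datasets library, and each `json_schema` string parsed back into a Python dict:

```python
import json
import pandas as pd

# Each split is stored on the Hub as parquet files (data/<SplitName>-*),
# so it can be read directly, e.g. (hypothetical file name / Hub ID):
#   df = pd.read_parquet("data/Github_easy-00000-of-00001.parquet")
#   ds = datasets.load_dataset("<hub-id-of-this-dataset>", split="Github_easy")

# Miniature stand-in frame with the two string columns every split carries:
df = pd.DataFrame({
    "json_schema": [json.dumps({"type": "object"})],
    "unique_id": ["demo-0001"],
})

# Parse a stored schema string back into a Python dict for downstream use.
schema = json.loads(df.loc[0, "json_schema"])
```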

Dataset Structure

The dataset is organized into the following subsets. Sample counts below refer to the original collections before validation filtering (see "Dataset Validation and Filtering"); the final per-split sizes appear in the split metadata above.

1. Washington Post (wp)

  • Samples: 125
  • Description: This subset contains JSON schema witnesses from the Washington Post dataset. The schemas have been processed and formatted for easy integration and analysis.
  • Original Directory: Washington Post

2. Snowplow (dg)

  • Samples: 420
  • Description: The Snowplow subset includes 420 JSON schema witnesses, each representing a structured data format used within the Snowplow event data pipeline.
  • Original Directory: Snowplow

3. Kubernetes (sat)

  • Samples: 1087
  • Description: This subset contains JSON schema witnesses from the Kubernetes ecosystem. It is particularly useful for analyzing configuration and deployment schemas.
  • Original Directory: Kubernetes

4. Github (sat)

  • Samples: 6335
  • Description: The largest subset in the dataset, containing 6335 samples from GitHub repositories. These schemas are ideal for studying open-source project configurations and automation.
  • Original Directory: Github

5. Handwritten (sat)

  • Samples: 197
  • Description: The Handwritten subset consists of 197 manually crafted JSON schema witnesses, designed to test edge cases and complex schema configurations.
  • Original Directory: Handwritten

6. Synthesized (sat)

  • Samples: 450
  • Description: This subset is synthesized from various sources to cover a broad range of JSON schema use cases, particularly focusing on containment and validation.
  • Original Directory: Synthesized

7. Github Trivial

  • Samples: 570
  • Description: Contains GitHub JSON schemas categorized as "trivial" with sizes less than 10. This split is designed to handle extremely simple and minimal schemas.

8. Github Easy

  • Samples: 2035
  • Description: Contains GitHub JSON schemas categorized as "easy," with sizes between 10 and 30. These schemas offer moderate complexity and are useful for testing general tools.

9. Github Medium

  • Samples: 2121
  • Description: Contains GitHub JSON schemas categorized as "medium," with sizes between 30 and 100. These schemas reflect intermediate complexity.

10. Github Hard

  • Samples: 1405
  • Description: Contains GitHub JSON schemas categorized as "hard," with sizes between 100 and 500. These are complex schemas that are challenging to process and validate.

11. Github Ultra

  • Samples: 204
  • Description: Contains GitHub JSON schemas categorized as "ultra," with sizes greater than 500. These schemas represent the most complex and large-scale structures, ideal for stress testing tools.

12. JSON Schema Store (JsonSchemaStore)

  • Samples: 650
  • Description: This subset includes 650 JSON schemas derived from the JSON Schema Store, comprising both training and validation data. The schemas cover a wide range of use cases and domains, making them a valuable addition for evaluating JSON Schema-related tasks such as validation, transformation, and optimization.
  • Original Source: dataunitylab/json-schema-store

13. Glaiveai2K

  • Samples: 1709
  • Description: JSON schemas drawn from the Glaive function-calling dataset. This split is defined in the dataset configuration above; see the split metadata for its size.
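The Github difficulty buckets above are defined purely by schema size. The README does not define the size metric, so the sketch below makes an illustrative assumption: it measures size as the number of nodes in the parsed JSON tree and maps it onto the stated thresholds (<10 trivial, 10–30 easy, 30–100 medium, 100–500 hard, >500 ultra):

```python
import json

def schema_size(schema) -> int:
    """Count nodes in a parsed JSON value (objects, arrays, and leaves).

    This node count is an assumed proxy for the README's unspecified
    "size" metric, not the benchmark's actual definition.
    """
    if isinstance(schema, dict):
        return 1 + sum(schema_size(v) for v in schema.values())
    if isinstance(schema, list):
        return 1 + sum(schema_size(v) for v in schema)
    return 1

def bucket(size: int) -> str:
    """Map a size to the difficulty buckets listed in this README."""
    if size < 10:
        return "trivial"
    if size <= 30:
        return "easy"
    if size <= 100:
        return "medium"
    if size <= 500:
        return "hard"
    return "ultra"

# Example: a tiny schema lands in the "trivial" bucket.
tiny = json.loads('{"type": "object"}')
label = bucket(schema_size(tiny))
```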

Dataset Validation and Filtering

To ensure the quality and validity of the JSON schemas in each split, a filtering process was applied to remove invalid schemas. The following statistics summarize the filtering results for each subset:

  • Washington Post:

    • Dropped: 0/125 invalid schemas
    • Kept: 125/125 valid schemas
  • Snowplow:

    • Dropped: 6/420 invalid schemas
    • Kept: 414/420 valid schemas
  • Kubernetes:

    • Dropped: 0/1087 invalid schemas
    • Kept: 1087/1087 valid schemas
  • Github:

    • Dropped: 380/6335 invalid schemas
    • Kept: 5955/6335 valid schemas
  • Handwritten:

    • Dropped: 50/197 invalid schemas
    • Kept: 147/197 valid schemas
  • Synthesized:

    • Dropped: 35/450 invalid schemas
    • Kept: 415/450 valid schemas
  • Github Trivial:

    • Dropped: 37/570 invalid schemas
    • Kept: 533/570 valid schemas
  • Github Easy:

    • Dropped: 59/2035 invalid schemas
    • Kept: 1976/2035 valid schemas
  • Github Medium:

    • Dropped: 112/2121 invalid schemas
    • Kept: 2009/2121 valid schemas
  • Github Hard:

    • Dropped: 139/1405 invalid schemas
    • Kept: 1266/1405 valid schemas
  • Github Ultra:

    • Dropped: 33/204 invalid schemas
    • Kept: 171/204 valid schemas
  • JSON Schema Store (JsonSchemaStore):

    • Dropped: 59/650 invalid schemas
    • Kept: 591/650 valid schemas

Citation

The datasets listed above are featured in the paper "Witness Generation for JSON Schema," and we thank the authors for their valuable contributions. For details on how these datasets were collected and processed, please refer to that paper.

License

This dataset is provided under the MIT License. Please ensure that you comply with the license terms when using or distributing this dataset.

Acknowledgements

We would like to thank the contributors and maintainers of the JSON schema projects and the open-source community for their invaluable work and support.