---
dataset_info:
  features:
    - name: filename
      dtype: string
    - name: json_schema
      dtype: string
  splits:
    - name: WashingtonPost
      num_bytes: 2703506
      num_examples: 125
    - name: Snowplow
      num_bytes: 1625722.8
      num_examples: 414
    - name: Kubernetes
      num_bytes: 25517031
      num_examples: 1087
    - name: Github
      num_bytes: 50985405.258089975
      num_examples: 5955
    - name: Handwritten
      num_bytes: 258663.28934010153
      num_examples: 147
    - name: Synthesized
      num_bytes: 184321.7888888889
      num_examples: 415
    - name: full
      num_bytes: 80026399.76805201
      num_examples: 8143
    - name: Github_trivial
      num_bytes: 2778039.0140350875
      num_examples: 533
    - name: Github_easy
      num_bytes: 2262055.8584766584
      num_examples: 1976
    - name: Github_medium
      num_bytes: 8410022.567656765
      num_examples: 2009
    - name: Github_hard
      num_bytes: 20925894.21352313
      num_examples: 1266
    - name: Github_ultra
      num_bytes: 14112596.470588235
      num_examples: 171
    - name: JsonSchemaStore
      num_bytes: 26516819.792307694
      num_examples: 591
  download_size: 47982672
  dataset_size: 236306477.82095852
configs:
  - config_name: default
    data_files:
      - split: WashingtonPost
        path: data/WashingtonPost-*
      - split: Snowplow
        path: data/Snowplow-*
      - split: Kubernetes
        path: data/Kubernetes-*
      - split: Github
        path: data/Github-*
      - split: Handwritten
        path: data/Handwritten-*
      - split: Synthesized
        path: data/Synthesized-*
      - split: full
        path: data/full-*
      - split: Github_trivial
        path: data/Github_trivial-*
      - split: Github_easy
        path: data/Github_easy-*
      - split: Github_medium
        path: data/Github_medium-*
      - split: Github_hard
        path: data/Github_hard-*
      - split: Github_ultra
        path: data/Github_ultra-*
      - split: JsonSchemaStore
        path: data/JsonSchemaStore-*
---

# JSONSchemaBench: JSON Schema Witness Generation Dataset

## Overview

This dataset is a collection of JSON Schema witnesses gathered from various sources, intended to support the study and evaluation of JSON Schema transformation, validation, and related operations. It consists of several subsets, each drawn from a different domain or source, and is structured to enable comprehensive testing and benchmarking of tools and algorithms that process JSON Schemas.
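Each record exposes the two declared features, `filename` and `json_schema`, with the schema stored as a serialized JSON string rather than a parsed object. A minimal sketch of working with one record (the record contents below are invented for illustration, and the repo id in the comment is an assumption):

```python
import json

# With the Hugging Face `datasets` library you would typically obtain rows via
#   load_dataset("Saibo-creator/JSONSchemaBench", split="Github_easy")
# (repo id assumed). Each row then resembles the hypothetical record below.
record = {
    "filename": "example.schema.json",
    "json_schema": '{"type": "object", "properties": {"id": {"type": "string"}}}',
}

# The json_schema field is a JSON string, so parse it before inspecting it.
schema = json.loads(record["json_schema"])
print(schema["type"])              # -> object
print(list(schema["properties"]))  # -> ['id']
```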

## Dataset Structure

The dataset is organized into the following subsets:

### 1. Washington Post (wp)

- Samples: 125
- Description: This subset contains JSON schema witnesses from the Washington Post dataset. The schemas have been processed and formatted for easy integration and analysis.
- Original Directory: `Washington Post`

### 2. Snowplow (dg)

- Samples: 414
- Description: The Snowplow subset includes 414 JSON schema witnesses, each representing a structured data format used within the Snowplow event data pipeline.
- Original Directory: `Snowplow`

### 3. Kubernetes (sat)

- Samples: 1087
- Description: This subset contains JSON schema witnesses from the Kubernetes ecosystem. It is particularly useful for analyzing configuration and deployment schemas.
- Original Directory: `Kubernetes`

### 4. Github (sat)

- Samples: 5955
- Description: The largest subset in the dataset, containing 5955 samples from GitHub repositories. These schemas are well suited to studying open-source project configurations and automation.
- Original Directory: `Github`

### 5. Handwritten (sat)

- Samples: 147
- Description: The Handwritten subset consists of 147 manually crafted JSON schema witnesses, designed to test edge cases and complex schema configurations.
- Original Directory: `Handwritten`

### 6. Synthesized (sat)

- Samples: 415
- Description: This subset is synthesized from various sources to cover a broad range of JSON Schema use cases, with a particular focus on containment and validation.
- Original Directory: `Synthesized`

### 7. Github Trivial

- Samples: 533
- Description: Contains GitHub JSON schemas categorized as "trivial", with a size below 10. This split covers extremely simple, minimal schemas.

### 8. Github Easy

- Samples: 1976
- Description: Contains GitHub JSON schemas categorized as "easy", with sizes between 10 and 30. These schemas offer moderate complexity and are useful for testing general-purpose tools.

### 9. Github Medium

- Samples: 2009
- Description: Contains GitHub JSON schemas categorized as "medium", with sizes between 30 and 100. These schemas reflect intermediate complexity.

### 10. Github Hard

- Samples: 1266
- Description: Contains GitHub JSON schemas categorized as "hard", with sizes between 100 and 500. These are complex schemas that are challenging to process and validate.

### 11. Github Ultra

- Samples: 171
- Description: Contains GitHub JSON schemas categorized as "ultra", with sizes greater than 500. These schemas represent the most complex and large-scale structures, ideal for stress-testing tools.

### 12. JSON Schema Store (JsonSchemaStore)

- Samples: 591
- Description: This subset includes 591 JSON schemas derived from the JSON Schema Store, comprising both training and validation data. The schemas cover a wide range of use cases and domains, making them a valuable addition for evaluating JSON Schema-related tasks such as validation, transformation, and optimization.
- Original Source: `dataunitylab/json-schema-store`
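The Github difficulty buckets above partition schemas by size. The exact size metric is not defined in this card, and the treatment of boundary values is an assumption (half-open intervals); the function name below is hypothetical. A sketch of the bucketing rule:

```python
def github_difficulty(size: int) -> str:
    """Map a schema-size value to the Github_* bucket names used above.

    Boundary handling is assumed: each bucket is a half-open interval,
    since the card only states approximate ranges (<10, 10-30, 30-100,
    100-500, >500).
    """
    if size < 10:
        return "trivial"
    if size < 30:
        return "easy"
    if size < 100:
        return "medium"
    if size < 500:
        return "hard"
    return "ultra"

print(github_difficulty(25))   # -> easy
print(github_difficulty(750))  # -> ultra
```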

## Citation

The subsets listed above are featured in the paper "Witness Generation for JSON Schema", and we extend our gratitude to the authors for their valuable contributions. For more detailed information about the data and the methods used, please refer to that paper.

## License

This dataset is provided under the MIT License. Please ensure that you comply with the license terms when using or distributing this dataset.

## Acknowledgements

We would like to thank the contributors and maintainers of the JSON schema projects and the open-source community for their invaluable work and support.