license: mit
dataset_info:
  - config_name: 100_tos
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 5240826
        num_examples: 92
    download_size: 2497746
    dataset_size: 5240826
  - config_name: 10_tos
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 1920213
        num_examples: 20
    download_size: 718890
    dataset_size: 1920213
  - config_name: 142_tos
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 12968483
        num_examples: 140
    download_size: 4884205
    dataset_size: 12968483
  - config_name: cuad
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 1180620
        num_examples: 28
    download_size: 484787
    dataset_size: 1180620
  - config_name: memnet_tos
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 5607746
        num_examples: 100
    download_size: 2012157
    dataset_size: 5607746
  - config_name: multilingual_unfair_clause
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 22775210
        num_examples: 200
    download_size: 9557263
    dataset_size: 22775210
  - config_name: polisis
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 3137858
        num_examples: 4570
      - name: validation
        num_bytes: 802441
        num_examples: 1153
      - name: test
        num_bytes: 967678
        num_examples: 1446
    download_size: 1827549
    dataset_size: 4907977
  - config_name: privacy_glue__piextract
    features:
      - name: document
        dtype: string
    splits:
      - name: validation
        num_bytes: 7106934
        num_examples: 4116
      - name: train
        num_bytes: 18497078
        num_examples: 12140
    download_size: 5707087
    dataset_size: 25604012
  - config_name: privacy_glue__policy_detection
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 13657226
        num_examples: 1301
    download_size: 6937382
    dataset_size: 13657226
  - config_name: privacy_glue__policy_ie
    features:
      - name: type_i
        dtype: string
      - name: type_ii
        dtype: string
    splits:
      - name: test
        num_bytes: 645788
        num_examples: 6
      - name: train
        num_bytes: 2707213
        num_examples: 25
    download_size: 1097051
    dataset_size: 3353001
  - config_name: privacy_glue__policy_qa
    features:
      - name: document
        dtype: string
    splits:
      - name: test
        num_bytes: 1353787
        num_examples: 20
      - name: dev
        num_bytes: 1230490
        num_examples: 20
      - name: train
        num_bytes: 5441319
        num_examples: 75
    download_size: 2418472
    dataset_size: 8025596
  - config_name: privacy_glue__polisis
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 3073878
        num_examples: 4570
      - name: validation
        num_bytes: 786299
        num_examples: 1153
      - name: test
        num_bytes: 947434
        num_examples: 1446
    download_size: 1816140
    dataset_size: 4807611
  - config_name: privacy_glue__privacy_qa
    features:
      - name: document
        dtype: string
    splits:
      - name: train
        num_bytes: 12099109
        num_examples: 27
      - name: test
        num_bytes: 4468753
        num_examples: 8
    download_size: 1221943
    dataset_size: 16567862
configs:
  - config_name: 100_tos
    data_files:
      - split: train
        path: 100_tos/train-*
  - config_name: 10_tos
    data_files:
      - split: train
        path: 10_tos/train-*
  - config_name: 142_tos
    data_files:
      - split: train
        path: 142_tos/train-*
  - config_name: cuad
    data_files:
      - split: train
        path: cuad/train-*
  - config_name: memnet_tos
    data_files:
      - split: train
        path: memnet_tos/train-*
  - config_name: multilingual_unfair_clause
    data_files:
      - split: train
        path: multilingual_unfair_clause/train-*
  - config_name: polisis
    data_files:
      - split: train
        path: privacy_glue/polisis/train-*
      - split: validation
        path: privacy_glue/polisis/validation-*
      - split: test
        path: privacy_glue/polisis/test-*
  - config_name: privacy_glue__piextract
    data_files:
      - split: validation
        path: privacy_glue/piextract/validation-*
      - split: train
        path: privacy_glue/piextract/train-*
  - config_name: privacy_glue__policy_detection
    data_files:
      - split: train
        path: privacy_glue/policy_detection/train-*
  - config_name: privacy_glue__policy_ie
    data_files:
      - split: test
        path: privacy_glue/policy_ie/test-*
      - split: train
        path: privacy_glue/policy_ie/train-*
  - config_name: privacy_glue__policy_qa
    data_files:
      - split: test
        path: privacy_glue/policy_qa/test-*
      - split: dev
        path: privacy_glue/policy_qa/dev-*
      - split: train
        path: privacy_glue/policy_qa/train-*
  - config_name: privacy_glue__polisis
    data_files:
      - split: train
        path: privacy_glue/polisis/train-*
      - split: validation
        path: privacy_glue/polisis/validation-*
      - split: test
        path: privacy_glue/polisis/test-*
  - config_name: privacy_glue__privacy_qa
    data_files:
      - split: train
        path: privacy_glue/privacy_qa/train-*
      - split: test
        path: privacy_glue/privacy_qa/test-*

# A collection of Terms of Service and Privacy Policy datasets
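Each subset is exposed as a separate configuration of the same Hub repository. As a quick orientation, the configuration names below are copied from the metadata above; the `load_dataset` call is only sketched in a comment, since it requires the `datasets` library and network access:

```python
# Configuration names copied from the dataset metadata above.
CONFIGS = [
    "100_tos", "10_tos", "142_tos", "cuad", "memnet_tos",
    "multilingual_unfair_clause", "polisis",
    "privacy_glue__piextract", "privacy_glue__policy_detection",
    "privacy_glue__policy_ie", "privacy_glue__policy_qa",
    "privacy_glue__polisis", "privacy_glue__privacy_qa",
]

# Any of these names can be passed as the second argument, e.g.:
#   import datasets
#   ds = datasets.load_dataset("chenghao/tos_pp_dataset", "cuad")
print(len(CONFIGS))  # 13 configurations
```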

## Annotated datasets

### CUAD

Specifically, the 28 service agreements from CUAD, which are licensed under CC BY 4.0 (subset: `cuad`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentQA

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "cuad")

# load_dataset returns a DatasetDict keyed by split, so index "train" first.
print(DocumentQA.model_validate_json(ds["train"]["document"][0]))
```

### 100 ToS

From Annotated 100 ToS, CC BY 4.0 (subset: `100_tos`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentEUConsumerLawAnnotation

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "100_tos")

# load_dataset returns a DatasetDict keyed by split, so index "train" first.
print(DocumentEUConsumerLawAnnotation.model_validate_json(ds["train"]["document"][0]))
```

### Multilingual Unfair Clause

From CLAUDETTE/Multilingual Unfair Clause, CC BY 4.0 (subset: `multilingual_unfair_clause`).

It was built from the CLAUDETTE corpus of 25 Terms of Service, each available in English, Italian, German, and Polish (100 documents in total), from A Corpus for Multilingual Analysis of Online Terms of Service.

#### Code

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "multilingual_unfair_clause")

# load_dataset returns a DatasetDict keyed by split, so index "train" first.
print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

### Memnet ToS

From 100 Terms of Service in English from Detecting and explaining unfairness in consumer contracts through memory networks, MIT (subset: `memnet_tos`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "memnet_tos")

# load_dataset returns a DatasetDict keyed by split, so index "train" first.
print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

### 142 ToS

From 142 Terms of Service in English, divided according to market sector, from Assessing the Cross-Market Generalization Capability of the CLAUDETTE System, unknown license (subset: `142_tos`). This should also include the 50 Terms of Service in English from "CLAUDETTE: an Automated Detector of Potentially Unfair Clauses in Online Terms of Service".

#### Code

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "142_tos")

# load_dataset returns a DatasetDict keyed by split, so index "train" first.
print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

### 10 ToS/PP

From 5 Terms of Service and 5 Privacy Policies in English and German (10 documents in total) from Cross-lingual Annotation Projection in Legal Texts, GNU GPL 3.0 (subset: `10_tos`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "10_tos")

# load_dataset returns a DatasetDict keyed by split, so index "train" first.
print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

### PolicyQA

This dataset seems to have some annotation issues: unanswerable questions are still given answer spans in the SQuAD v1 format instead of being marked unanswerable as in the v2 format.

From PolicyQA, MIT (subset: `privacy_glue__policy_qa`).
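Note that the privacy_glue subsets use a double-underscore configuration name (e.g. `privacy_glue__policy_qa`), while the underlying files live under a slash-separated path (`privacy_glue/policy_qa` in the `configs` metadata above). A minimal sketch of the mapping, assuming the `/` → `__` convention holds for all privacy_glue subsets:

```python
def config_name(subset_path: str) -> str:
    """Map a repo-style subset path (as used in the data_files paths in the
    metadata above) to its configuration name, e.g.
    "privacy_glue/policy_qa" -> "privacy_glue__policy_qa".
    Assumes the "/" -> "__" convention used by this dataset's metadata."""
    return subset_path.replace("/", "__")

print(config_name("privacy_glue/policy_qa"))  # privacy_glue__policy_qa
```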

#### Code

```python
import datasets
from tos_datasets.proto import DocumentQA

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__policy_qa")

print(DocumentQA.model_validate_json(ds["train"]["document"][0]))
```
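The annotation issue noted above comes down to the difference between the two SQuAD formats: v1 assumes every question has an answer span, while v2 flags unanswerable questions with `is_impossible` and an empty answer list. A minimal stdlib sketch (the records here are illustrative, not the actual PolicyQA schema):

```python
# Illustrative records, not the actual PolicyQA schema.
v1_record = {  # SQuAD v1: every question carries at least one answer span
    "question": "Does the policy mention data retention?",
    "answers": [{"text": "retained for 30 days", "answer_start": 120}],
}
v2_record = {  # SQuAD v2: unanswerable questions are flagged explicitly
    "question": "Does the policy mention biometric data?",
    "answers": [],
    "is_impossible": True,
}

def is_unanswerable(record: dict) -> bool:
    """SQuAD v2-style check; on v1-formatted data this can never be True."""
    return record.get("is_impossible", False) and not record["answers"]

print(is_unanswerable(v1_record), is_unanswerable(v2_record))  # False True
```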

### PolicyIE

From PolicyIE, MIT (subset: `privacy_glue__policy_ie`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentSequenceClassification, DocumentEvent

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__policy_ie")

print(DocumentSequenceClassification.model_validate_json(ds["train"]["type_i"][0]))
print(DocumentEvent.model_validate_json(ds["train"]["type_ii"][0]))
```

### Policy Detection

From [policy-detection-data](https://github.com/infsys-lab/policy-detection-data), GPL 3.0 (subset: `privacy_glue__policy_detection`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__policy_detection")

print(DocumentClassification.model_validate_json(ds["train"]["document"][0]))
```

### Polisis

From Polisis, unknown license (subset: `privacy_glue__polisis`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__polisis")

print(DocumentClassification.model_validate_json(ds["test"]["document"][0]))
```

### PrivacyQA

From PrivacyQA, MIT (subset: `privacy_glue__privacy_qa`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__privacy_qa")

print(DocumentClassification.model_validate_json(ds["test"]["document"][0]))
```

### Piextract

From Piextract, unknown license (subset: `privacy_glue__piextract`).

#### Code

```python
import datasets
from tos_datasets.proto import DocumentSequenceClassification

ds = datasets.load_dataset("chenghao/tos_pp_dataset", "privacy_glue__piextract")

print(DocumentSequenceClassification.model_validate_json(ds["train"]["document"][0]))
```

WIP