---
dataset_info:
  config_name: triplet
  features:
    - name: query
      dtype: string
    - name: positive
      dtype: string
    - name: negative
      dtype: string
  splits:
    - name: train
      num_bytes: 12581563.792427007
      num_examples: 42076
    - name: test
      num_bytes: 3149278.207572993
      num_examples: 10532
  download_size: 1254810
  dataset_size: 15730842
configs:
  - config_name: triplet
    data_files:
      - split: train
        path: triplet/train-*
      - split: test
        path: triplet/test-*
task_categories:
  - sentence-similarity
---

This dataset is the triplet subset of https://huggingface.co/datasets/sentence-transformers/sql-questions, split into train and test sets.
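
For example, to load both splits:

```python
from datasets import load_dataset

# "triplet" is the only config; each record has "query", "positive", "negative"
dataset = load_dataset("aladar/sql-questions", "triplet")
print(dataset["train"].num_rows, dataset["test"].num_rows)  # 42076 10532
```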

The test split can be passed directly to Sentence Transformers' `TripletEvaluator`.
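
A minimal evaluation sketch (the model below is just a placeholder; any `SentenceTransformer` works):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

test_dataset = load_dataset("aladar/sql-questions", "triplet", split="test")

evaluator = TripletEvaluator(
    anchors=test_dataset["query"],
    positives=test_dataset["positive"],
    negatives=test_dataset["negative"],
    name="sql-questions-test",
)

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model
print(evaluator(model))  # accuracy: fraction of triplets ranked correctly
```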

The train and test splits have no queries in common: the split was made over unique queries, so all triplets for a given query land in the same split.
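
This can be checked directly on the published dataset:

```python
from datasets import load_dataset

ds = load_dataset("aladar/sql-questions", "triplet")
assert not set(ds["train"]["query"]) & set(ds["test"]["query"])
```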

Here's the full script used to generate this dataset:

```python
import os

import datasets
from sklearn.model_selection import train_test_split

# Load the source triplet data (only a train split exists upstream)
dataset = datasets.load_dataset(
    "sentence-transformers/sql-questions", "triplet", split="train"
)

# Deduplicate queries; a dict preserves deterministic (insertion) order
queries_unique = list({record["query"]: None for record in dataset})

# Split over unique queries so no query appears in both splits
queries_tr, queries_te = train_test_split(
    queries_unique, test_size=0.2, random_state=42
)

queries_tr = set(queries_tr)
queries_te = set(queries_te)
train_dataset = dataset.filter(lambda record: record["query"] in queries_tr)
test_dataset = dataset.filter(lambda record: record["query"] in queries_te)

# Sanity checks: disjoint query sets, and every row assigned to exactly one split
assert not set(train_dataset["query"]) & set(test_dataset["query"])
assert len(train_dataset) + len(test_dataset) == len(dataset)

dataset_dict = datasets.DatasetDict({"train": train_dataset, "test": test_dataset})
dataset_dict.push_to_hub(
    "aladar/sql-questions", config_name="triplet", token=os.environ["HF_TOKEN_CREATE"]
)
```