Update README.md
README.md CHANGED
@@ -1,27 +1,75 @@
----
-dataset_info:
-  config_name: triplet
-  features:
-  - name: query
-    dtype: string
-  - name: positive
-    dtype: string
-  - name: negative
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 12581563.792427007
-    num_examples: 42076
-  - name: test
-    num_bytes: 3149278.207572993
-    num_examples: 10532
-  download_size: 1254810
-  dataset_size: 15730842
-configs:
-- config_name: triplet
-  data_files:
-  - split: train
-    path: triplet/train-*
-  - split: test
-    path: triplet/test-*
----
+---
+dataset_info:
+  config_name: triplet
+  features:
+  - name: query
+    dtype: string
+  - name: positive
+    dtype: string
+  - name: negative
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 12581563.792427007
+    num_examples: 42076
+  - name: test
+    num_bytes: 3149278.207572993
+    num_examples: 10532
+  download_size: 1254810
+  dataset_size: 15730842
+configs:
+- config_name: triplet
+  data_files:
+  - split: train
+    path: triplet/train-*
+  - split: test
+    path: triplet/test-*
+task_categories:
+- sentence-similarity
+---
+
+This dataset is the triplet subset of https://huggingface.co/datasets/sentence-transformers/sql-questions with a train and test split.
+
+The test split can be passed to [`TripletEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#tripletevaluator).
+
+The train and test splits don't have any queries in common.
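
For example, a minimal evaluation sketch (the `all-MiniLM-L6-v2` checkpoint is an arbitrary placeholder, not something this card prescribes):

```python
# Minimal sketch: score a model on the test split with TripletEvaluator.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

test_dataset = load_dataset("aladar/sql-questions", "triplet", split="test")

evaluator = TripletEvaluator(
    anchors=test_dataset["query"],
    positives=test_dataset["positive"],
    negatives=test_dataset["negative"],
    name="sql-questions-test",
)
model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
print(evaluator(model))  # accuracy: fraction of triplets ranked correctly
```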
+
+<details>
+<summary>Here's the full script used to generate this dataset</summary>
+
+```python
+import os
+
+import datasets
+from sklearn.model_selection import train_test_split
+
+
+dataset = datasets.load_dataset(
+    "sentence-transformers/sql-questions", "triplet", split="train"
+)
+
+# Use a dict for deterministic (insertion) order
+queries_unique = list({record["query"]: None for record in dataset})
+
+queries_tr, queries_te = train_test_split(
+    queries_unique, test_size=0.2, random_state=42
+)
+
+queries_tr = set(queries_tr)
+queries_te = set(queries_te)
+train_dataset = dataset.filter(lambda record: record["query"] in queries_tr)
+test_dataset = dataset.filter(lambda record: record["query"] in queries_te)
+
+assert not set(train_dataset["query"]) & set(test_dataset["query"])
+assert len(train_dataset) + len(test_dataset) == len(dataset)
+
+
+dataset_dict = datasets.DatasetDict({"train": train_dataset, "test": test_dataset})
+dataset_dict.push_to_hub(
+    "aladar/sql-questions", config_name="triplet", token=os.environ["HF_TOKEN_CREATE"]
+)
+```
+
+</details>
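
And a quick sanity check, sketched from the card's own no-leakage claim, that the published splits share no queries:

```python
# Sanity check: the train and test splits have disjoint query sets.
from datasets import load_dataset

ds = load_dataset("aladar/sql-questions", "triplet")
train_queries = set(ds["train"]["query"])
test_queries = set(ds["test"]["query"])

assert not train_queries & test_queries, "query leakage between splits"
print(f"{len(train_queries)} train / {len(test_queries)} test unique queries")
```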