Update README.md
README.md CHANGED
@@ -1,4 +1,16 @@
 ---
+language:
+- en
+multilinguality:
+- monolingual
+size_categories:
+- 10M<n<100M
+task_categories:
+- feature-extraction
+- sentence-similarity
+pretty_name: WikiAnswers Duplicate Questions
+tags:
+- sentence-transformers
 dataset_info:
   config_name: pair
   features:
@@ -18,3 +30,20 @@ configs:
   - split: train
     path: pair/train-*
 ---
+
+# Dataset Card for WikiAnswers Duplicate Questions
+
+This dataset contains duplicate questions from the [WikiAnswers Corpus](https://github.com/afader/oqa#wikianswers-corpus), formatted to be easily used with Sentence Transformers to train embedding models.
+
+## Dataset Subsets
+
+### `pair` subset
+
+* Columns: "anchor", "positive"
+* Column types: `str`, `str`
+* Examples:
+  ```python
+
+  ```
+* Collection strategy: Reading the WikiAnswers dataset from [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data), which has lists of duplicate questions. I've considered all adjacent questions as positive pairs, plus the last and first questions. So, e.g., 5 duplicate questions result in 5 duplicate pairs.
+* Deduplicated: No
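
The wrap-around pairing described in the collection strategy can be sketched in a few lines. This is an illustrative reconstruction, not the original preprocessing script; the function name and sample questions are made up:

```python
# Sketch of the pairing scheme: each question is paired with the next one
# in its duplicate list, and the list wraps around so the last question is
# also paired with the first. n duplicate questions therefore yield
# n (anchor, positive) pairs, matching the "5 questions -> 5 pairs" example.
def make_pairs(duplicates: list[str]) -> list[tuple[str, str]]:
    n = len(duplicates)
    return [(duplicates[i], duplicates[(i + 1) % n]) for i in range(n)]

# Hypothetical duplicate list; any 5 questions produce 5 pairs.
questions = [f"How does the water cycle work? (variant {i})" for i in range(5)]
assert len(make_pairs(questions)) == 5
```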
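Since the card presents the pairs as training data for Sentence Transformers embedding models, a minimal usage sketch follows. The dataset id is an assumption for illustration (substitute the actual repository id), and MultipleNegativesRankingLoss is a common choice for (anchor, positive) data rather than a method prescribed by the card:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Assumed dataset id; replace with this dataset's actual repository id.
train_dataset = load_dataset(
    "sentence-transformers/wikianswers-duplicates", "pair", split="train"
)

model = SentenceTransformer("all-MiniLM-L6-v2")
# In-batch negatives loss: each (anchor, positive) pair is a positive
# example, and the other positives in the batch serve as negatives.
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```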