Update README.md
README.md CHANGED
@@ -4,7 +4,7 @@ language:
 multilinguality:
 - monolingual
 size_categories:
--
+- 100M<n<1B
 task_categories:
 - feature-extraction
 - sentence-similarity
@@ -46,4 +46,4 @@ This dataset contains duplicate questions from the [WikiAnswers Corpus](https://
 
 ```
 * Collection strategy: Reading the WikiAnswers dataset from [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data), which has lists of duplicate questions. I've considered all adjacent questions as a positive pair, plus the last and first question. So, e.g. 5 duplicate questions result in 5 duplicate pairs.
-* Deduplified: No
+* Deduplified: No
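The collection strategy described in the diff above turns each list of duplicate questions into positive pairs: every adjacent pair of questions, plus one pair joining the last question back to the first. As a minimal sketch of that pairing logic (not part of the commit; the function name and sample questions are illustrative assumptions), it corresponds to something like:

```python
# Minimal sketch of the pairing strategy described in the dataset card.
# The function name and the sample questions below are illustrative only,
# not taken from the commit itself.

def build_positive_pairs(duplicates):
    """Turn one list of duplicate questions into (anchor, positive) pairs:
    all adjacent questions, plus the last question paired with the first."""
    pairs = [(duplicates[i], duplicates[i + 1]) for i in range(len(duplicates) - 1)]
    if len(duplicates) > 1:
        pairs.append((duplicates[-1], duplicates[0]))  # close the loop: last + first
    return pairs

# Example: 5 duplicate questions -> 4 adjacent pairs + (last, first) = 5 pairs
questions = [
    "How big is the sun?",
    "What is the size of the sun?",
    "How large is the sun?",
    "What's the sun's size?",
    "How big is our sun?",
]
print(len(build_positive_pairs(questions)))  # 5
```

With 5 duplicate questions this yields 4 adjacent pairs plus the (last, first) pair, i.e. 5 positive pairs, matching the example in the card.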