---
license: apache-2.0
---

# TREC-TOT 2024: TOMT-KIS-TRIPLETS Dataset

## Description

**TOMT-KIS-TRIPLETS** is a refined subset of the broader [TOMT-KIS](https://huggingface.co/datasets/webis/tip-of-my-tongue-known-item-search/blob/main/README.md) dataset, curated for advanced applications. Since Wikipedia, IMDb, and YouTube are the most frequently occurring domains in `links_on_answer_path`, we focus specifically on these sources for higher relevance and usability. By leveraging a Wikipedia dump (more details here!) and IMDb, our objective is to provide a dataset that contains a training set and, in particular, a labeled test set with direct links to the relevant Wikipedia or IMDb articles, making it well suited for supervised learning tasks.

## Dataset Structure

See also [TOMT-KIS](https://huggingface.co/datasets/webis/tip-of-my-tongue-known-item-search/blob/main/README.md).

### Data Instances

### Data Fields

The TOMT-KIS-TRIPLETS dataset includes the following key columns (a short loading sketch follows the list):

- `qid`: Query ID from Reddit.
- `query`: The full Reddit question, combining the Title with the Content, separated by '\n\n'. Note that the selected Reddit answer is excluded from this field.
- `docno_pos`: Document ID of the positive document within the Wikipedia dump, associated with the query.
- `url_wikipedia_pos`: Wikipedia URL of the positive document linked to the query.
- `positive`: The content of the Wikipedia article serving as the positive document for the query.
- `negative`: The content of the Wikipedia article serving as the negative document for the query.
- `negative_trec_tot_2024_id`: Document ID of the negative document within the [TREC-ToT 2024](https://trec-tot.github.io) corpus.
- `docno_pos`: Position of the positive document in the TREC-ToT 2024 corpus.
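
As an illustration, the columns above can be inspected with the `datasets` library. The repository id and split name in this sketch are placeholders and may differ from the actual Hugging Face path of this dataset:

```python
from datasets import load_dataset

# Placeholder repository id and split -- substitute the actual path of TOMT-KIS-TRIPLETS.
triplets = load_dataset("webis/tomt-kis-triplets", split="train")

example = triplets[0]
print(example["qid"])                        # Reddit query id
print(example["query"][:200])                # Reddit title + content, joined by '\n\n'
print(example["url_wikipedia_pos"])          # Wikipedia URL of the positive document
print(example["positive"][:200])             # text of the positive Wikipedia article
print(example["negative"][:200])             # text of the negative Wikipedia article
print(example["negative_trec_tot_2024_id"])  # id of the negative document in the TREC-ToT 2024 corpus
```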

### Data processing steps

We filter the TOMT-KIS dataset for questions with a `chosen_answer` that includes links to Wikipedia, IMDb, or YouTube. Since the `chosen_answer` links aren't explicitly provided, we first extract URLs from the chosen answers. Then, we filter these URLs to include only those with Wikipedia as the domain. Canonicalization of URLs is performed to ensure consistency and accurate matching across sources.
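
A rough sketch of what this extraction and canonicalization step could look like; the regex, the helper name, and the normalization rules are illustrative, not the exact pipeline used:

```python
import re
from urllib.parse import urlparse, urlunparse

URL_PATTERN = re.compile(r"https?://[^\s)\]]+")

def wikipedia_links(chosen_answer: str) -> list[str]:
    """Extract URLs from an answer text, keep Wikipedia links, and canonicalize them."""
    links = []
    for raw_url in URL_PATTERN.findall(chosen_answer):
        parsed = urlparse(raw_url)
        if not parsed.netloc.lower().endswith("wikipedia.org"):
            continue
        # Canonicalize: force https, lower-case the host, drop query string and fragment.
        links.append(urlunparse(("https", parsed.netloc.lower(), parsed.path, "", "", "")))
    return links

print(wikipedia_links("Could be this film: https://en.wikipedia.org/wiki/Inception?useskin=vector"))
# ['https://en.wikipedia.org/wiki/Inception']
```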

The filtered dataset includes:

- **13,089** Reddit questions answered with a Wikipedia link, successfully matched to the Wikipedia dump.
- **13,136** Reddit questions answered with an IMDb link, of which **12,916** are unique to IMDb (non-overlapping with the Wikipedia links).

| Domain    | With matched URL                                   | Without matched URL | Total                                           |
| --------- | -------------------------------------------------- | ------------------- | ----------------------------------------------- |
| Wikipedia | 13,089                                             | 651                 | 13,740                                          |
| IMDb      | 13,136<br>(12,916 without overlap with Wikipedia)  | 3,830               | 16,746                                          |
| YouTube   | /                                                  | 12,994              | 12,994                                          |
| **Total** | 26,005                                             | 17,475              | **43,480**<br><br>**32,553** without duplicates |

To avoid bias in the training data, we remove all entries from TOMT-KIS-Triplet whose
a) `url_wikipedia_pos` (the positive document's Wikipedia URL) or
b) `negative_trec_tot_2024_id` (the negative document's ID)
occurs in one of the qrels files from TREC-ToT 2024.
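
A toy sketch of this exclusion step; the DataFrame contents and the two qrels-derived sets are made-up stand-ins, shown only to illustrate the filtering logic:

```python
import pandas as pd

# Stand-in triplet data; the real dataset has many more columns and rows.
triplets = pd.DataFrame({
    "qid": ["q1", "q2"],
    "url_wikipedia_pos": [
        "https://en.wikipedia.org/wiki/Example_A",
        "https://en.wikipedia.org/wiki/Example_B",
    ],
    "negative_trec_tot_2024_id": ["12345", "67890"],
})

# Stand-in identifiers collected from the TREC-ToT 2024 qrels files.
qrels_wikipedia_urls = {"https://en.wikipedia.org/wiki/Example_B"}
qrels_doc_ids = {"99999"}

# Drop every entry whose positive URL or negative document ID occurs in the qrels.
leaks = (
    triplets["url_wikipedia_pos"].isin(qrels_wikipedia_urls)
    | triplets["negative_trec_tot_2024_id"].isin(qrels_doc_ids)
)
triplets = triplets[~leaks]
print(len(triplets))  # 1 in this toy example
```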

- `negative_trec_tot_2024_id` can be used directly to filter out the corresponding entries. In this way, 240 entries were filtered out of our dataset.
- As TREC-ToT 2024 only contains the `wikidata_id` of each document, we first use [SPARQL](https://query.wikidata.org) to retrieve the corresponding `wikidata_url` for each `wikidata_id` in each qrels file before filtering (a query sketch follows this list). In this way, 58 entries were filtered out of our dataset.
- In total, TOMT-KIS-Triplet contains 32,553 entries.
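
A minimal sketch of this lookup against the public Wikidata SPARQL endpoint; the item `Q25188` is only a placeholder `wikidata_id`, and the user agent string is illustrative:

```python
import requests

# schema:about / schema:isPartOf is the standard pattern for Wikipedia sitelinks on Wikidata.
SITELINK_QUERY = """
SELECT ?article WHERE {
  ?article schema:about wd:%s ;
           schema:isPartOf <https://en.wikipedia.org/> .
}
"""

def wikipedia_url_for(wikidata_id: str) -> str | None:
    """Resolve the English Wikipedia URL for a wikidata_id, or None if there is no sitelink."""
    response = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": SITELINK_QUERY % wikidata_id, "format": "json"},
        headers={"User-Agent": "tomt-kis-triplets-example/0.1"},
        timeout=30,
    )
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    return bindings[0]["article"]["value"] if bindings else None

print(wikipedia_url_for("Q25188"))  # placeholder id; prints the linked English Wikipedia URL
```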