    path: zho/queries-train.jsonl
  - split: test
    path: zho/queries-test.jsonl
---
<h1 align="center">WebFAQ Retrieval Dataset</h1>
<h4 align="center">
    <p>
        <a href=#overview>Overview</a> |
        <a href=#details>Details</a> |
        <a href=#structure>Structure</a> |
        <a href=#examples>Examples</a> |
        <a href=#considerations>Considerations</a> |
        <a href=#license>License</a> |
        <a href=#citation>Citation</a> |
        <a href=#contact>Contact</a> |
        <a href=#acknowledgement>Acknowledgement</a>
    </p>
</h4>

## Overview

The **WebFAQ Retrieval Dataset** is a carefully **filtered and curated subset** of the broader [WebFAQ Q&A Dataset](https://huggingface.co/datasets/anonymous202501/webfaq).
It is **purpose-built for Information Retrieval (IR)** tasks, such as **training and evaluating** dense or sparse retrieval models in **multiple languages**.

Each of the **20 largest** languages from the WebFAQ corpus has been **thoroughly cleaned** and **refined** to ensure a clear notion of relevance between a query (question) and its corresponding document (answer). In particular, we applied:

- **Deduplication** of near-identical questions (a minimal sketch of the idea follows this list),
- **Semantic consistency checks** for question-answer alignment,
- **Train/Test splits** for retrieval experiments.

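To illustrate the deduplication step, here is a minimal sketch of the general idea, assuming a simple normalize-and-hash approach; it is not the exact pipeline used to build the dataset:

```python
import hashlib
import re

def normalize(question: str) -> str:
    # Lowercase, drop punctuation, and collapse whitespace.
    text = re.sub(r"[^\w\s]", "", question.lower())
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(qa_pairs: list) -> list:
    # Keep the first QA pair for each normalized question text.
    seen, unique = set(), []
    for qa in qa_pairs:
        key = hashlib.sha1(normalize(qa["question"]).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(qa)
    return unique

pairs = [
    {"question": "How do I reset my password?", "answer": "Use the account settings page."},
    {"question": "How do I reset my password??", "answer": "Use the account settings page."},
]
print(len(deduplicate(pairs)))  # -> 1
```
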
## Details

### Languages

The **WebFAQ Retrieval Dataset** covers **20 high-resource languages** from the original WebFAQ corpus, each comprising tens of thousands to several million QA pairs after our rigorous filtering steps:

| Language | # QA pairs |
|----------|-----------:|
| ara      |       143k |
| dan      |       138k |
| deu      |       891k |
| eng      |      5.28M |
| fas      |       227k |
| fra      |       570k |
| hin      |      96.6k |
| ind      |      96.6k |
| ita      |       209k |
| jpn      |       280k |
| kor      |      79.1k |
| nld      |       349k |
| pol      |       179k |
| por      |       186k |
| rus      |       346k |
| spa      |       558k |
| swe      |       144k |
| tur      |       110k |
| vie      |       105k |
| zho      |       125k |

## Structure

Unlike the raw Q&A dataset, **WebFAQ Retrieval** provides explicit **train/test splits** for each of the 20 languages. The general structure for each language is:

- **Corpus**: A set of unique documents (answers) with IDs and text fields.
- **Queries**: A set of question strings, each tied to a document ID for relevance.
- **Qrels**: Relevance labels mapping each question to its relevant document (the corresponding answer).

### Folder Layout (e.g., for eng)

```
eng/
├── corpus.jsonl        # all unique documents (answers)
├── queries.jsonl       # all queries for train/test
├── train.jsonl         # relevance annotations for train
└── test.jsonl          # relevance annotations for test
```

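For orientation, the records follow a BEIR-style layout. The field names below (`_id`, `title`, `text`, `query-id`, `corpus-id`) are those used by the loading snippet in the next section; the example values and the `score` field are illustrative assumptions:

```
# corpus.jsonl: one document (answer) per line
{"_id": "doc123", "title": "", "text": "You can reset your password on the account settings page."}

# queries.jsonl: one query (question) per line
{"_id": "q456", "text": "How do I reset my password?"}

# train.jsonl / test.jsonl: qrels linking queries to documents
{"query-id": "q456", "corpus-id": "doc123", "score": 1}
```
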
## Examples

Below is a small snippet showing how to load the English train/test sets with [🤗 Datasets](https://github.com/huggingface/datasets):

```python
import json
from datasets import load_dataset
from tqdm import tqdm

# Load train qrels
train_qrels = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-qrels",
    split="train"
)

# Inspect the first qrel
print(json.dumps(train_qrels[0], indent=4))

# Load the corpus (answers)
data_corpus = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-corpus",
    split="corpus"
)
corpus = {
    d["_id"]: {"title": d["title"], "text": d["text"]} for d in tqdm(data_corpus)
}

# Inspect the document referenced by the first qrel
print("Document:")
print(json.dumps(corpus[train_qrels[0]["corpus-id"]], indent=4))

# Load all queries
data_queries = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-queries",
    split="queries"
)
queries = {
    q["_id"]: q["text"] for q in tqdm(data_queries)
}

# Inspect the query referenced by the first qrel
print("Query:")
print(json.dumps(queries[train_qrels[0]["query-id"]], indent=4))

# Keep only those queries with relevance annotations
query_ids = {q["query-id"] for q in train_qrels}
queries = {
    qid: query for qid, query in queries.items() if qid in query_ids
}
print(f"Number of queries: {len(queries)}")
```
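
Building on the snippet above, (question, answer) training pairs for a dense retriever can be assembled by joining the qrels with the two lookup dictionaries (a minimal sketch reusing the `train_qrels`, `corpus`, and `queries` variables defined above):

```python
# Join qrels with the lookup dicts to obtain (question, answer) pairs.
train_pairs = [
    (queries[qrel["query-id"]], corpus[qrel["corpus-id"]]["text"])
    for qrel in train_qrels
]
print(f"Number of training pairs: {len(train_pairs)}")
print(train_pairs[0])
```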

Below is a code snippet showing how to evaluate retrieval performance using the `mteb` library:

> **Note**: WebFAQ is not yet available as a multilingual task in the `mteb` library. The code snippet below is a placeholder for when it becomes available.

```python
from mteb import MTEB
from mteb.tasks.Retrieval.multilingual.WebFAQRetrieval import WebFAQRetrieval

# ... Load model ...

# Load the WebFAQ task
task = WebFAQRetrieval()
eval_split = "test"

evaluation = MTEB(tasks=[task])
evaluation.run(
    model,
    eval_splits=[eval_split],
    output_folder="output",
    overwrite_results=True
)
```

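For the `model` placeholder, any encoder that `mteb` supports can be used, for instance a multilingual `sentence-transformers` model (an illustrative choice, not one prescribed by the dataset authors):

```python
from sentence_transformers import SentenceTransformer

# Any SentenceTransformer-compatible encoder works with MTEB's run() interface.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
```
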
## Considerations

Please note the following considerations when using the collected QAs:

- *[Q&A Dataset]* **Risk of Duplicate or Near-Duplicate Content**: The raw Q&A dataset is large and includes minor paraphrases.
- *[Retrieval Dataset]* **Sparse Relevance**: As is typical for raw FAQ data, each question has exactly one labeled "best" (on-page) answer. Additional valid answers may exist on other websites but are not labeled as relevant.
- **Language Detection Limitations**: Some QA pairs mix languages or contain brand names, which can confuse automatic language classification.
- **No Guarantee of Factual Accuracy**: Answers reflect the content of the source websites. They may include outdated, biased, or incorrect information.
- **Copyright and Privacy**: Please ensure compliance with any applicable laws and the source websites' terms.

## License

The **Collection of WebFAQ Datasets** is shared under the [**Creative Commons Attribution 4.0 (CC BY 4.0)**](https://creativecommons.org/licenses/by/4.0/) license.

> **Note**: The dataset is derived from public webpages in Common Crawl snapshots (2022–2024) and is intended for **research purposes**. Each FAQ's text is published by the original website under its own terms. Downstream users should verify any usage constraints on the **original websites** as well as [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use/).

1323
+ ## Citation
1324
+
1325
+ If you use this dataset in your research, please consider citing the associated paper:
1326
+
1327
+ ```bibtex
1328
+ @misc{dinzinger2025webfaq,
1329
+ title={WebFAQ: A Multilingual Collection of Natural Q&amp;A Datasets for Dense Retrieval},
1330
+ author={Michael Dinzinger and Laura Caspari and Kanishka Ghosh Dastidar and Jelena Mitrović and Michael Granitzer},
1331
+ year={2025},
1332
+ eprint={2502.20936},
1333
+ archivePrefix={arXiv},
1334
+ primaryClass={cs.CL}
1335
+ }
1336
+ ```
1337
+
## Contact

For inquiries and feedback, please feel free to contact us via e-mail ([[email protected]](mailto:[email protected])) or start a discussion on Hugging Face or GitHub.

## Acknowledgement

We thank the Common Crawl and Web Data Commons teams for providing the underlying data, and all contributors who helped shape the WebFAQ project.

### Thank you

We hope the **Collection of WebFAQ Datasets** serves as a valuable resource for your research. Please consider citing it in any publications or projects that use it. If you encounter issues or want to contribute improvements, feel free to get in touch with us on Hugging Face or GitHub.

Happy researching!