  data_files:
  - split: queries
    path: zho/queries.jsonl
---
<h1 align="center">WebFAQ Retrieval Dataset</h1>
<h4 align="center">
    <p>
        <a href=#overview>Overview</a> |
        <a href=#details>Details</a> |
        <a href=#structure>Structure</a> |
        <a href=#examples>Examples</a> |
        <a href=#considerations>Considerations</a> |
        <a href=#license>License</a> |
        <a href=#citation>Citation</a> |
        <a href=#contact>Contact</a> |
        <a href=#acknowledgement>Acknowledgement</a>
    </p>
</h4>

## Overview

The **WebFAQ Retrieval Dataset** is a carefully **filtered and curated subset** of the broader [WebFAQ Q&A Dataset](https://huggingface.co/datasets/anonymous202501/webfaq).
It is **purpose-built for Information Retrieval (IR)** tasks, such as **training and evaluating** dense or sparse retrieval models in **multiple languages**.

The data for each of the **20 largest** languages in the WebFAQ corpus has been **thoroughly cleaned** and **refined** to ensure a clear notion of relevance between a query (question) and its corresponding document (answer). In particular, we applied:

- **Deduplication** of near-identical questions (a toy sketch follows this list),
- **Semantic consistency checks** for question-answer alignment,
- **Train/Test splits** for retrieval experiments.
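
As an illustration of the first step, one simple way to catch near-identical questions is to compare normalized surface forms. This is only a toy sketch of that idea, not the actual WebFAQ filtering pipeline:

```python
import re

def normalize(question: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace."""
    question = re.sub(r"[^\w\s]", "", question.lower())
    return re.sub(r"\s+", " ", question).strip()

# Two near-identical questions; only the first survives deduplication.
qa_pairs = [
    {"question": "What are your opening hours?", "answer": "9am to 5pm."},
    {"question": "what are your opening hours??", "answer": "We open 9-5."},
]

seen, deduplicated = set(), []
for qa in qa_pairs:
    key = normalize(qa["question"])
    if key not in seen:
        seen.add(key)
        deduplicated.append(qa)

print(len(deduplicated))  # 1
```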

## Details

### Languages

The **WebFAQ Retrieval Dataset** covers **20 high-resource languages** from the original WebFAQ corpus, each comprising tens of thousands to several million QA pairs after our rigorous filtering steps:

| Language | # QA pairs |
|----------|-----------:|
| ara | 143k |
| dan | 138k |
| deu | 891k |
| eng | 5.28M |
| fas | 227k |
| fra | 570k |
| hin | 96.6k |
| ind | 96.6k |
| ita | 209k |
| jpn | 280k |
| kor | 79.1k |
| nld | 349k |
| pol | 179k |
| por | 186k |
| rus | 346k |
| spa | 558k |
| swe | 144k |
| tur | 110k |
| vie | 105k |
| zho | 125k |

## Structure

Unlike the raw Q&A dataset, **WebFAQ Retrieval** provides explicit **train/test splits** for each of the 20 languages. The general structure for each language is:

- **Corpus**: A set of unique documents (answers) with IDs and text fields.
- **Queries**: A set of question strings, each tied to a document ID for relevance.
- **Qrels**: Relevance labels, mapping each question to its relevant document (corresponding answer).

### Folder Layout (e.g., for eng)

```
eng/
├── corpus.jsonl   # all unique documents (answers)
├── queries.jsonl  # all queries for train/test
├── train.jsonl    # relevance annotations for train
└── test.jsonl     # relevance annotations for test
```
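
For orientation, records in these files look roughly as follows. The field names (`_id`, `title`, `text`, `query-id`, `corpus-id`) match the loading example below; the values are invented for illustration, and the `score` field is an assumption based on common qrels conventions:

```
# corpus.jsonl: one document (answer) per line
{"_id": "doc0", "title": "", "text": "Our store is open Monday to Friday, 9am to 5pm."}

# queries.jsonl: one question per line
{"_id": "q0", "text": "What are your opening hours?"}

# train.jsonl / test.jsonl: one relevance annotation per line
{"query-id": "q0", "corpus-id": "doc0", "score": 1}
```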

## Examples

Below is a small snippet showing how to load the English corpus, queries, and train qrels with [🤗 Datasets](https://github.com/huggingface/datasets):

```python
import json
from datasets import load_dataset
from tqdm import tqdm

# Load train qrels
train_qrels = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-qrels",
    split="train"
)

# Inspect first qrel
print(json.dumps(train_qrels[0], indent=4))

# Load the corpus (answers)
data_corpus = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-corpus",
    split="corpus"
)
corpus = {
    d["_id"]: {"title": d["title"], "text": d["text"]} for d in tqdm(data_corpus)
}

# Inspect first document
print("Document:")
print(json.dumps(corpus[train_qrels[0]["corpus-id"]], indent=4))

# Load all queries
data_queries = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-queries",
    split="queries"
)
queries = {
    q["_id"]: q["text"] for q in tqdm(data_queries)
}

# Inspect first query
print("Query:")
print(json.dumps(queries[train_qrels[0]["query-id"]], indent=4))

# Keep only those queries with relevance annotations
query_ids = {q["query-id"] for q in train_qrels}
queries = {
    qid: query for qid, query in queries.items() if qid in query_ids
}
print(f"Number of queries: {len(queries)}")
```
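
With `corpus`, `queries`, and `train_qrels` loaded as above, a common next step is to pair each question with its relevant answer, e.g. as positives for training a dense retriever. A minimal sketch; the pair format is one common convention, not something prescribed by the dataset:

```python
# Build (question, answer) training pairs from the train qrels.
train_pairs = [
    (queries[qrel["query-id"]], corpus[qrel["corpus-id"]]["text"])
    for qrel in train_qrels
    if qrel["query-id"] in queries and qrel["corpus-id"] in corpus
]
print(f"Number of training pairs: {len(train_pairs)}")
print(train_pairs[0])
```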

Below is a code snippet showing how to evaluate retrieval performance with the `mteb` library:

> **Note**: WebFAQ is not yet available as a multilingual task in the `mteb` library. The code snippet below is a placeholder for when it becomes available.

```python
from mteb import MTEB
from mteb.tasks.Retrieval.multilingual.WebFAQRetrieval import WebFAQRetrieval

# ... Load model ...

# Load the WebFAQ task
task = WebFAQRetrieval()
eval_split = "test"

evaluation = MTEB(tasks=[task])
evaluation.run(
    model,
    eval_splits=[eval_split],
    output_folder="output",
    overwrite_results=True
)
```
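
Until that task lands, retrieval quality can be estimated manually. Below is a brute-force sketch that assumes the `corpus` dict from the loading example above and an arbitrary dense encoder from `sentence-transformers` (the model name is only an example); it computes Recall@10 on the test split and is only practical on a subset of the corpus:

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Any dense encoder works here; this model name is only an example.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Test-split relevance annotations (same config as the train qrels above).
test_qrels = load_dataset(
    "anonymous202501/webfaq-retrieval", "eng-qrels", split="test"
)
relevant = {q["query-id"]: q["corpus-id"] for q in test_qrels}

# Encode all documents and the annotated test queries.
doc_ids = list(corpus.keys())
doc_emb = model.encode([corpus[d]["text"] for d in doc_ids], normalize_embeddings=True)
data_queries = load_dataset(
    "anonymous202501/webfaq-retrieval", "eng-queries", split="queries"
)
test_queries = {q["_id"]: q["text"] for q in data_queries if q["_id"] in relevant}
query_ids = list(test_queries.keys())
query_emb = model.encode([test_queries[q] for q in query_ids], normalize_embeddings=True)

# Normalized embeddings make the dot product a cosine similarity.
scores = query_emb @ doc_emb.T
top10 = np.argsort(-scores, axis=1)[:, :10]
hits = sum(
    relevant[qid] in {doc_ids[j] for j in top10[i]}
    for i, qid in enumerate(query_ids)
)
print(f"Recall@10: {hits / len(query_ids):.3f}")
```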

## Considerations

Please note the following considerations when using the collected QA pairs:

- *[Q&A Dataset]* **Risk of Duplicate or Near-Duplicate Content**: The raw Q&A dataset is large and includes minor paraphrases.
- *[Retrieval Dataset]* **Sparse Relevance**: Since the data comes from raw FAQ pages, each question typically has one “best” (on-page) answer. Additional valid answers may exist on other websites but are not labeled as relevant.
- **Language Detection Limitations**: Some QA pairs mix languages or contain brand names, which can confuse automatic language classification.
- **No Guarantee of Factual Accuracy**: Answers reflect the content of the source websites. They may include outdated, biased, or incorrect information.
- **Copyright and Privacy**: Please ensure compliance with applicable laws and the source websites’ terms.

## License

The **Collection of WebFAQ Datasets** is shared under the [**Creative Commons Attribution 4.0 (CC BY 4.0)**](https://creativecommons.org/licenses/by/4.0/) license.

> **Note**: The dataset is derived from public webpages in Common Crawl snapshots (2022–2024) and intended for **research purposes**. Each FAQ’s text is published by the original website under its own terms. Downstream users should verify any usage constraints from the **original websites** as well as [Common Crawl’s Terms of Use](https://commoncrawl.org/terms-of-use/).

## Citation

If you use this dataset in your research, please consider citing the associated paper:

```bibtex
@misc{webfaq2025,
    title = {WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense Retrieval},
    author = {Anonymous Author(s)},
    year = {2025},
    howpublished = {...},
    note = {Under review}
}
```

## Contact

TBD

## Acknowledgement

We thank the Common Crawl and Web Data Commons teams for providing the underlying data, and all contributors who helped shape the WebFAQ project.

### Thank you

We hope the **Collection of WebFAQ Datasets** serves as a valuable resource for your research. Please consider citing it in any publications or projects that use it. If you encounter issues or want to contribute improvements, feel free to get in touch with us on HuggingFace or GitHub.

Happy researching!