  data_files:
  - split: queries
    path: zho/queries.jsonl
---
<h1 align="center">WebFAQ Retrieval Dataset</h1>
<h4 align="center">
    <p>
        <a href="#overview">Overview</a> |
        <a href="#details">Details</a> |
        <a href="#structure">Structure</a> |
        <a href="#examples">Examples</a> |
        <a href="#considerations">Considerations</a> |
        <a href="#license">License</a> |
        <a href="#citation">Citation</a> |
        <a href="#contact">Contact</a> |
        <a href="#acknowledgement">Acknowledgement</a>
    </p>
</h4>

## Overview

The **WebFAQ Retrieval Dataset** is a carefully **filtered and curated subset** of the broader [WebFAQ Q&A Dataset](https://huggingface.co/datasets/anonymous202501/webfaq).
It is **purpose-built for Information Retrieval (IR)** tasks such as **training and evaluating** dense or sparse retrieval models in **multiple languages**.

Each of the **20 largest** languages in the WebFAQ corpus has been **thoroughly cleaned** and **refined** so that the relevance between a query (question) and its corresponding document (answer) is unambiguous. In particular, we applied:

- **Deduplication** of near-identical questions,
- **Semantic consistency checks** for question-answer alignment,
- **Train/Test splits** for retrieval experiments.

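The full cleaning pipeline is not reproduced in this card. Purely as an illustration of what the first of these steps involves, here is a minimal near-duplicate filter over question strings (a hedged sketch with invented helper names, not the actual WebFAQ code):

```python
import re

def normalize(question: str) -> str:
    # Lowercase, drop punctuation, and collapse whitespace so that
    # near-identical phrasings map onto the same key.
    question = re.sub(r"[^\w\s]", "", question.lower())
    return re.sub(r"\s+", " ", question).strip()

def deduplicate(questions: list[str]) -> list[str]:
    # Keep only the first occurrence of each normalized question.
    seen: set[str] = set()
    unique = []
    for q in questions:
        key = normalize(q)
        if key not in seen:
            seen.add(key)
            unique.append(q)
    return unique

# The two phrasings below collapse to a single entry.
print(deduplicate(["What is WebFAQ?", "what is  WebFAQ??"]))
```
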
## Details

### Languages

The **WebFAQ Retrieval Dataset** covers **20 high-resource languages** from the original WebFAQ corpus, each comprising tens of thousands to hundreds of thousands of QA pairs after our rigorous filtering steps:

| Language | # QA pairs |
|----------|------------|
| TBD      |            |

## Structure

Unlike the raw Q&A dataset, **WebFAQ Retrieval** provides explicit **train/test splits** for each of the 20 languages. The general structure for each language is:

- **Corpus**: A set of unique documents (answers) with IDs and text fields.
- **Queries**: A set of question strings, each tied to a document ID for relevance.
- **Qrels**: Relevance labels mapping each question to its relevant document (the corresponding answer), as the illustrative records below show.

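Concretely, the records in these files look roughly as follows. The field names match the loading snippet in the [Examples](#examples) section; the values are invented for illustration, and the `score` field in the qrels is an assumption based on the usual BEIR-style layout:

```
# corpus.jsonl — one document (answer) per line
{"_id": "doc1", "title": "Account FAQ", "text": "To reset your password, open the login page and ..."}

# queries.jsonl — one question per line
{"_id": "q1", "text": "How do I reset my password?"}

# train.jsonl / test.jsonl — one relevance judgment per line
{"query-id": "q1", "corpus-id": "doc1", "score": 1}
```
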
### Folder Layout (e.g., for eng)

```
eng/
├── corpus.jsonl    # all unique documents (answers)
├── queries.jsonl   # all queries for train/test
├── train.jsonl     # relevance annotations for train
├── test.jsonl      # relevance annotations for test
└── ...
```

## Examples

Below is a small snippet showing how to load the English train/test sets with [🤗 Datasets](https://github.com/huggingface/datasets):

```python
import json
from datasets import load_dataset
from tqdm import tqdm

# Load the train qrels (relevance annotations)
train_qrels = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-qrels",
    split="train"
)

# Inspect the first qrel
print(json.dumps(train_qrels[0], indent=4))

# Load the corpus (answers) and index it by document ID
data_corpus = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-corpus",
    split="corpus"
)
corpus = {
    d["_id"]: {"title": d["title"], "text": d["text"]} for d in tqdm(data_corpus)
}

# Inspect the document relevant to the first qrel
print("Document:")
print(json.dumps(corpus[train_qrels[0]["corpus-id"]], indent=4))

# Load all queries and index them by query ID
data_queries = load_dataset(
    "anonymous202501/webfaq-retrieval",
    "eng-queries",
    split="queries"
)
queries = {
    q["_id"]: q["text"] for q in tqdm(data_queries)
}

# Inspect the query of the first qrel
print("Query:")
print(json.dumps(queries[train_qrels[0]["query-id"]], indent=4))

# Keep only those queries with relevance annotations in the train split
query_ids = {q["query-id"] for q in train_qrels}
queries = {
    qid: query for qid, query in queries.items() if qid in query_ids
}
print(f"Number of queries: {len(queries)}")
```

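Building on the objects above, BEIR-style evaluation code typically expects the qrels as a nested `{query-id: {corpus-id: score}}` mapping. A short sketch (the `score` field is assumed; if the rows carry no score, a relevance of 1 per judged pair is the natural default):

```python
from collections import defaultdict

# Rearrange the flat qrel rows into query-id -> {corpus-id: score}
qrels = defaultdict(dict)
for row in train_qrels:
    # `score` is assumed here; BEIR-style qrels store a graded relevance value.
    qrels[row["query-id"]][row["corpus-id"]] = int(row.get("score", 1))

print(f"Queries with at least one judged document: {len(qrels)}")
```
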
Below is a code snippet showing how to evaluate retrieval performance using the `mteb` library:

> **Note**: WebFAQ is not yet available as a multilingual task in the `mteb` library. The code snippet below is a placeholder for when it becomes available.

```python
from mteb import MTEB
from mteb.tasks.Retrieval.multilingual.WebFAQRetrieval import WebFAQRetrieval

# ... Load model ...

# Load the WebFAQ task and evaluate on the test split
task = WebFAQRetrieval()
eval_split = "test"

evaluation = MTEB(tasks=[task])
evaluation.run(
    model,
    eval_splits=[eval_split],
    output_folder="output",
    overwrite_results=True
)
```

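The `model` above is intentionally left open. As one concrete possibility (an assumption, not a requirement of the task), any Sentence Transformers model exposing the usual `encode` interface can be passed to `evaluation.run`:

```python
from sentence_transformers import SentenceTransformer

# Hypothetical choice of checkpoint; any encoder with an `encode` method works.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
```
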
## Considerations

Please note the following considerations when using the collected QA pairs:

- *[Q&A Dataset]* **Risk of Duplicate or Near-Duplicate Content**: The raw Q&A dataset is large and includes minor paraphrases.
- *[Retrieval Dataset]* **Sparse Relevance**: Because the data comes from raw FAQ pages, each question typically has one “best” (on-page) answer. Additional valid answers may exist on other websites but are not labeled as relevant.
- **Language Detection Limitations**: Some QA pairs mix languages or contain brand names, which can confuse automatic language classification.
- **No Guarantee of Factual Accuracy**: Answers reflect the content of the source websites. They may include outdated, biased, or incorrect information.
- **Copyright and Privacy**: Please ensure compliance with any applicable laws and the source websites’ terms.

## License

The **Collection of WebFAQ Datasets** is shared under the [**Creative Commons Attribution 4.0 (CC BY 4.0)**](https://creativecommons.org/licenses/by/4.0/) license.

> **Note**: The dataset is derived from public webpages in Common Crawl snapshots (2022–2024) and is intended for **research purposes**. Each FAQ’s text is published by the original website under its own terms. Downstream users should verify any usage constraints on the **original websites** as well as [Common Crawl’s Terms of Use](https://commoncrawl.org/terms-of-use/).

## Citation

If you use this dataset in your research, please consider citing the associated paper:

```bibtex
@misc{webfaq2025,
    title = {WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense Retrieval},
    author = {Anonymous Author(s)},
    year = {2025},
    howpublished = {...},
    note = {Under review}
}
```

## Contact

TBD

## Acknowledgement

We thank the Common Crawl and Web Data Commons teams for providing the underlying data, and all contributors who helped shape the WebFAQ project.

### Thank you

We hope the **Collection of WebFAQ Datasets** serves as a valuable resource for your research. Please consider citing it in any publications or projects that use it. If you encounter issues or want to contribute improvements, feel free to get in touch with us on Hugging Face or GitHub.

Happy researching!