<h1 align="center">WebFAQ Retrieval Dataset</h1>
<h4 align="center">
<p>
<a href="#overview">Overview</a> |
<a href="#details">Details</a> |
<a href="#structure">Structure</a> |
<a href="#examples">Examples</a> |
<a href="#considerations">Considerations</a> |
<a href="#license">License</a> |
<a href="#citation">Citation</a> |
<a href="#contact">Contact</a> |
<a href="#acknowledgement">Acknowledgement</a>
</p>
</h4>

2824
+ ## Overview
2825
+
2826
+ The **WebFAQ Retrieval Dataset** is a carefully **filtered and curated subset** of the broader [WebFAQ Q&A Dataset](https://huggingface.co/datasets/PaDaS-Lab/webfaq).
2827
+ It is **purpose-built for Information Retrieval (IR)** tasks, such as **training and evaluating** dense or sparse retrieval models in **multiple languages**.
2828
+
2829
+ Each of the **49 largest** languages from the WebFAQ corpus has been **thoroughly cleaned** and **refined** to ensure an unblurred notion of relevance between a query (question) and its corresponding document (answer). In particular, we applied:
2830
+
2831
+ - **Deduplication** of near-identical questions,
2832
+ - **Semantic consistency checks** for question-answer alignment,
2833
+ - **Train/Test splits** for retrieval experiments.
2834
+
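As an illustration of the first filtering step, near-identical questions can be collapsed by comparing normalized forms. This is only a minimal sketch, not the actual WebFAQ pipeline; the normalization rules (casefolding, accent and punctuation stripping, whitespace collapsing) are assumptions for the example.

```python
import re
import unicodedata

def normalize(question: str) -> str:
    """Casefold, strip accents and punctuation, collapse whitespace."""
    text = unicodedata.normalize("NFKD", question).casefold()
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(questions):
    """Keep the first occurrence of each normalized question."""
    seen, kept = set(), []
    for q in questions:
        key = normalize(q)
        if key not in seen:
            seen.add(key)
            kept.append(q)
    return kept

questions = [
    "How do I reset my password?",
    "how do I reset my password",
    "Where is my order?",
]
print(deduplicate(questions))
# → ['How do I reset my password?', 'Where is my order?']
```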
2835
+ ## Details
2836
+
2837
+ ### Languages
2838
+
2839
+ The **WebFAQ Retrieval Dataset** covers **49 languages** from the original WebFAQ corpus, both high-resource as well as low-resource. To ensure diversity, the 49 subsets originate from at least 100 different websites each. A single language comprises a few thousands to a few million of QA pairs after our rigorous filtering steps:
2840
+
2841
+ #### Top 20 Languages
2842
+
2843
+ | Language | # QA pairs |
2844
+ |----------|-----------:|
2845
+ | ara | 143k |
2846
+ | dan | 138k |
2847
+ | deu | 891k |
2848
+ | eng | 5.28M |
2849
+ | fas | 227k |
2850
+ | fra | 570k |
2851
+ | hin | 96.6k |
2852
+ | ind | 96.6k |
2853
+ | ita | 209k |
2854
+ | jpn | 280k |
2855
+ | kor | 79.1k |
2856
+ | nld | 349k |
2857
+ | pol | 179k |
2858
+ | por | 186k |
2859
+ | rus | 346k |
2860
+ | spa | 558k |
2861
+ | swe | 144k |
2862
+ | tur | 110k |
2863
+ | vie | 105k |
2864
+ | zho | 125k |
2865
+
<details>
<summary> Table of all 49 languages (lexicographical order) </summary>

| Language | # QA pairs | # Test |
|----------|-----------:|-------:|
| ara | 143k | 10k |
| aze | 4869 | 487 |
| ben | 14.3k | 1432 |
| bul | 34.7k | 3474 |
| cat | 12.7k | 1270 |
| ces | 72.3k | 7231 |
| dan | 138k | 10k |
| deu | 891k | 10k |
| ell | 38.5k | 3852 |
| eng | 5.28M | 10k |
| est | 12.9k | 1290 |
| fas | 227k | 10k |
| fin | 73.5k | 7355 |
| fra | 570k | 10k |
| heb | 39.0k | 3896 |
| hin | 100k | 10k |
| hrv | 5545 | 555 |
| hun | 45.3k | 4530 |
| ind | 111k | 10k |
| isl | 4778 | 478 |
| ita | 258k | 10k |
| jpn | 309k | 10k |
| kat | 2405 | 241 |
| kaz | 2995 | 300 |
| kor | 102k | 10k |
| lav | 13.1k | 1312 |
| lit | 18.4k | 1837 |
| mar | 7404 | 741 |
| msa | 6429 | 643 |
| nld | 371k | 10k |
| nor | 63.2k | 6324 |
| pol | 193k | 10k |
| por | 209k | 10k |
| ron | 59.9k | 5990 |
| rus | 388k | 10k |
| slk | 31.5k | 3153 |
| slv | 16.2k | 1617 |
| spa | 605k | 10k |
| sqi | 2077 | 208 |
| srp | 5824 | 583 |
| swe | 159k | 10k |
| tgl | 3697 | 370 |
| tha | 47.4k | 4743 |
| tur | 145k | 10k |
| ukr | 68.5k | 6851 |
| urd | 2775 | 278 |
| uzb | 1263 | 127 |
| vie | 124k | 10k |
| zho | 132k | 10k |

</details>

<details>
<summary> Table of all 49 languages (ordered by size) </summary>

| Language | # QA pairs | # Test |
|----------|-----------:|-------:|
| eng | 5.28M | 10k |
| deu | 891k | 10k |
| spa | 605k | 10k |
| fra | 570k | 10k |
| rus | 388k | 10k |
| nld | 371k | 10k |
| jpn | 309k | 10k |
| ita | 258k | 10k |
| fas | 227k | 10k |
| por | 209k | 10k |
| pol | 193k | 10k |
| swe | 159k | 10k |
| tur | 145k | 10k |
| ara | 143k | 10k |
| dan | 138k | 10k |
| zho | 132k | 10k |
| vie | 124k | 10k |
| ind | 111k | 10k |
| kor | 102k | 10k |
| hin | 100k | 10k |
| fin | 73.5k | 7355 |
| ces | 72.3k | 7231 |
| ukr | 68.5k | 6851 |
| nor | 63.2k | 6324 |
| ron | 59.9k | 5990 |
| tha | 47.4k | 4743 |
| hun | 45.3k | 4530 |
| heb | 39.0k | 3896 |
| ell | 38.5k | 3852 |
| bul | 34.7k | 3474 |
| slk | 31.5k | 3153 |
| lit | 18.4k | 1837 |
| slv | 16.2k | 1617 |
| ben | 14.3k | 1432 |
| lav | 13.1k | 1312 |
| est | 12.9k | 1290 |
| cat | 12.7k | 1270 |
| mar | 7404 | 741 |
| msa | 6429 | 643 |
| srp | 5824 | 583 |
| hrv | 5545 | 555 |
| aze | 4869 | 487 |
| isl | 4778 | 478 |
| tgl | 3697 | 370 |
| kaz | 2995 | 300 |
| urd | 2775 | 278 |
| kat | 2405 | 241 |
| sqi | 2077 | 208 |
| uzb | 1263 | 127 |

</details>

## Structure

Unlike the raw Q&A dataset, **WebFAQ Retrieval** provides explicit **train/test splits** for each of the 49 languages. The general structure for each language is:

- **Corpus**: A set of unique documents (answers) with IDs and text fields.
- **Queries**: A set of question strings, each tied to a document ID for relevance.
- **Qrels**: Relevance labels mapping each question to its relevant document (the corresponding answer).

### Folder Layout (e.g., for eng)

```
eng/
├── corpus.jsonl    # all unique documents (answers)
├── queries.jsonl   # all queries for train/test
├── train.jsonl     # relevance annotations for train
└── test.jsonl      # relevance annotations for test
```

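The files above can be read with plain JSON Lines tooling. The following is a minimal sketch of the assumed record schemas; the field names (`_id`, `title`, `text` for corpus/queries, `query-id` and `corpus-id` for qrels) are inferred from the loading example in the Examples section, and the sample records are made up for illustration.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sample records illustrating the assumed JSONL schemas.
sample = {
    "corpus.jsonl": [{"_id": "d1", "title": "", "text": "You can reset your password in the account settings."}],
    "queries.jsonl": [{"_id": "q1", "text": "How do I reset my password?"}],
    "train.jsonl": [{"query-id": "q1", "corpus-id": "d1"}],
}

# Write the sample files into a temporary eng/ folder
root = Path(tempfile.mkdtemp()) / "eng"
root.mkdir()
for name, records in sample.items():
    with open(root / name, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

# Read the qrels back and resolve query/document texts
corpus = {r["_id"]: r for r in map(json.loads, open(root / "corpus.jsonl", encoding="utf-8"))}
queries = {r["_id"]: r["text"] for r in map(json.loads, open(root / "queries.jsonl", encoding="utf-8"))}
for qrel in map(json.loads, open(root / "train.jsonl", encoding="utf-8")):
    print(queries[qrel["query-id"]], "->", corpus[qrel["corpus-id"]]["text"])
```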
## Examples

Below is a small snippet showing how to load the English train/test sets with [🤗 Datasets](https://github.com/huggingface/datasets):

```python
import json

from datasets import load_dataset
from tqdm import tqdm

# Load train qrels
train_qrels = load_dataset(
    "PaDaS-Lab/webfaq-retrieval",
    "eng-qrels",
    split="train"
)

# Inspect first qrel
print(json.dumps(train_qrels[0], indent=4))

# Load the corpus (answers)
data_corpus = load_dataset(
    "PaDaS-Lab/webfaq-retrieval",
    "eng-corpus",
    split="corpus"
)
corpus = {
    d["_id"]: {"title": d["title"], "text": d["text"]} for d in tqdm(data_corpus)
}

# Inspect the document of the first qrel
print("Document:")
print(json.dumps(corpus[train_qrels[0]["corpus-id"]], indent=4))

# Load all queries
data_queries = load_dataset(
    "PaDaS-Lab/webfaq-retrieval",
    "eng-queries",
    split="queries"
)
queries = {
    q["_id"]: q["text"] for q in tqdm(data_queries)
}

# Inspect the query of the first qrel
print("Query:")
print(json.dumps(queries[train_qrels[0]["query-id"]], indent=4))

# Keep only those queries with relevance annotations
query_ids = {q["query-id"] for q in train_qrels}
queries = {
    qid: query for qid, query in queries.items() if qid in query_ids
}
print(f"Number of queries: {len(queries)}")
```

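Once `corpus`, `queries`, and the qrels are loaded as above, evaluating a retriever reduces to comparing its ranking against the labeled document. The sketch below shows this with toy in-memory data and a word-overlap scorer standing in for a real dense or sparse model; the data and the scoring function are illustrative assumptions, not part of the dataset.

```python
# Toy stand-ins for the corpus/queries/qrels loaded from the dataset
corpus = {
    "d1": {"title": "", "text": "You can reset your password in the account settings."},
    "d2": {"title": "", "text": "Shipping usually takes three to five business days."},
}
queries = {"q1": "How do I reset my password?", "q2": "How long does shipping take?"}
qrels = [{"query-id": "q1", "corpus-id": "d1"}, {"query-id": "q2", "corpus-id": "d2"}]

def rank(query, corpus):
    """Rank document IDs by word overlap with the query (toy retriever)."""
    q_tokens = set(query.lower().split())
    return sorted(corpus, key=lambda did: -len(q_tokens & set(corpus[did]["text"].lower().split())))

# Mean Reciprocal Rank over the qrels
reciprocal_ranks = []
for qrel in qrels:
    ranking = rank(queries[qrel["query-id"]], corpus)
    reciprocal_ranks.append(1.0 / (ranking.index(qrel["corpus-id"]) + 1))
mrr = sum(reciprocal_ranks) / len(reciprocal_ranks)
print(f"MRR: {mrr:.2f}")
```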
Below is a code snippet showing how to evaluate retrieval performance with the `mteb` library:

> **Note**: WebFAQ is not yet available as a multilingual task in the `mteb` library. The code snippet below is a placeholder for when it becomes available.

```python
from mteb import MTEB
from mteb.tasks.Retrieval.multilingual.WebFAQRetrieval import WebFAQRetrieval

# ... Load model ...

# Load the WebFAQ task
task = WebFAQRetrieval()
eval_split = "test"

evaluation = MTEB(tasks=[task])
evaluation.run(
    model,
    eval_splits=[eval_split],
    output_folder="output",
    overwrite_results=True
)
```

## Considerations

Please note the following considerations when using the collected QAs:

- *[Q&A Dataset]* **Risk of Duplicate or Near-Duplicate Content**: The raw Q&A dataset is large and includes minor paraphrases.
- *[Retrieval Dataset]* **Sparse Relevance**: Because the data comes from raw FAQs, each question typically has one “best” (on-page) answer. Additional valid answers may exist on other websites but are not labeled as relevant.
- **Language Detection Limitations**: Some QA pairs mix languages or contain brand names, which can confuse automatic language classification.
- **No Guarantee of Factual Accuracy**: Answers reflect the content of the source websites. They may include outdated, biased, or incorrect information.
- **Copyright and Privacy**: Please ensure compliance with any applicable laws and the source websites’ terms.

## License

The **Collection of WebFAQ Datasets** is shared under the [**Creative Commons Attribution 4.0 (CC BY 4.0)**](https://creativecommons.org/licenses/by/4.0/) license.

> **Note**: The dataset is derived from public webpages in Common Crawl snapshots (2022–2024) and is intended for **research purposes**. Each FAQ’s text is published by the original website under its terms. Downstream users should verify any usage constraints on the **original websites** as well as [Common Crawl’s Terms of Use](https://commoncrawl.org/terms-of-use/).

## Citation

If you use this dataset in your research, please consider citing the associated paper:

```bibtex
@misc{dinzinger2025webfaq,
      title={WebFAQ: A Multilingual Collection of Natural Q\&A Datasets for Dense Retrieval},
      author={Michael Dinzinger and Laura Caspari and Kanishka Ghosh Dastidar and Jelena Mitrović and Michael Granitzer},
      year={2025},
      eprint={2502.20936},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contact

For inquiries and feedback, please feel free to contact us via e-mail ([[email protected]](mailto:[email protected])) or start a discussion on HuggingFace or GitHub.

## Acknowledgement

We thank the Common Crawl and Web Data Commons teams for providing the underlying data, and all contributors who helped shape the WebFAQ project.

### Thank you

We hope the **Collection of WebFAQ Datasets** serves as a valuable resource for your research. Please consider citing it in any publications or projects that use it. If you encounter issues or want to contribute improvements, feel free to get in touch with us on HuggingFace or GitHub.

Happy researching!