  data_files:
  - split: default
    path: data/vie_zho.jsonl
---
<h1 align="center">WebFAQ Bilingual Datasets (Bitexts)</h1>
<h4 align="center">
<p>
<a href=#overview>Overview</a> |
<a href=#details>Details</a> |
<a href=#structure>Structure</a> |
<a href=#examples>Examples</a> |
<a href=#considerations>Considerations</a> |
<a href=#license>License</a> |
<a href=#citation>Citation</a> |
<a href=#contact>Contact</a> |
<a href=#acknowledgement>Acknowledgement</a>
</p>
</h4>

## Overview

The **WebFAQ Bilingual Datasets** (a.k.a. **Bitexts**) are derived from the [WebFAQ Q&A Dataset](https://huggingface.co/datasets/anonymous202501/webfaq). Instead of monolingual question-answer (QA) pairs, each entry here contains **aligned QA pairs** in **two different languages**. These alignments are produced with **state-of-the-art bitext mining** (cross-lingual sentence embeddings) followed by an **automated translation evaluation** step using an LLM (GPT-4o-mini).

The resulting bitext corpora span **over 1,000 language pairs** (drawn from the 75 languages in WebFAQ), offering a **high-quality resource** for **machine translation**, **cross-lingual IR**, and **bitext mining** research.

**Why is it useful?**
- Curated **bilingual** dataset with QA pairs aligned across languages.
- Large coverage: **1.5 million** aligned question-answer pairs.
- Facilitates **training and testing of cross-lingual models** (e.g., for sentence embedding, CLIR, or neural machine translation).

**Background**
- Aligned data is extracted from **FAQ pages** on the web that are annotated with [schema.org FAQPage](https://schema.org/FAQPage) markup.
- Built on [Web Data Commons](https://webdatacommons.org) outputs (Oct 2022 – Oct 2024 Common Crawl snapshots).
- Aligned via [LaBSE](https://aclanthology.org/2022.acl-long.62) embeddings and GPT-based translation checks, yielding **~95% precision** on a sampled evaluation.

## Details

### Language Coverage

Each of the **bilingual datasets** corresponds to a specific language pair, e.g., `eng-deu`, `eng-fra`, etc. In total, there are over **1,000** language combinations, each containing **≥100** aligned QA pairs (the largest subsets include tens of thousands of pairs).

- The most frequent language pairs, such as `deu-eng`, `eng-fra`, and `eng-spa`, each contain **30k+** aligned samples.
- Less frequent language pairs still have **≥100** QA alignments, making them suitable for smaller-scale bilingual tasks.

### Construction Method
1. **Extraction of QAs** from WebFAQ (removing near-duplicates).
2. **Computation of LaBSE embeddings** for each QA text.
3. **Similarity-based candidate matching** within the **same website** but across different languages.
4. **Automated QA alignment validation**: GPT-based translation scoring to prune low-quality matches, targeting ~95% precision.

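The matching in steps 2–3 can be sketched as follows. This is a minimal illustration, not the actual pipeline: it uses toy 3-dimensional vectors and a made-up `threshold` in place of real 768-dimensional LaBSE embeddings and the GPT-based validation of step 4, and it keeps a candidate pair only if the two QAs are mutual nearest neighbors above the threshold.

```python
import numpy as np

def match_candidates(emb_a, emb_b, threshold=0.8):
    """Pair rows of emb_a with rows of emb_b by cosine similarity.

    A pair (i, j) is kept only if it is a mutual best match and its
    similarity exceeds `threshold` (function name and threshold are
    illustrative, not taken from the WebFAQ pipeline).
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T                   # cosine similarity matrix
    best_b = sim.argmax(axis=1)     # best partner in B for each row of A
    best_a = sim.argmax(axis=0)     # best partner in A for each row of B
    pairs = []
    for i, j in enumerate(best_b):
        if best_a[j] == i and sim[i, j] >= threshold:  # mutual best match
            pairs.append((i, j, float(sim[i, j])))
    return pairs

# Toy vectors standing in for LaBSE embeddings of two QAs per language
emb_de = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.1]])
emb_ja = np.array([[0.9, 0.2, 0.0], [0.1, 0.0, 1.0]])
print(match_candidates(emb_de, emb_ja))  # only (0, 0) survives the cutoff
```

In the real pipeline the comparison is additionally restricted to QAs from the same website, which keeps the candidate space small.
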
## Structure

Each dataset file covers **one language pair**, e.g. `eng-deu`. Each row contains:

- **`origin`**: The FAQ's website, given as scheme, host, and, optionally, port.
- **`labse_similarity`**: LaBSE similarity score between the concatenated question and answer texts in the two languages.
- **`url`**: URL of the webpage from which the QA pair was extracted.
- **`question1`**: Question in the first language.
- **`question2`**: Question in the second language.
- **`answer1`**: Answer in the first language.
- **`answer2`**: Answer in the second language.
- **`details`**: Additional metadata such as `urls`, `topics`, or `question types` of the two original QA pairs.

> **Note**: The bilingual datasets do not ship with an official train/validation/test split. If you require such splits (e.g., for training models), you can create them programmatically. The language pair is not stored as a field in the dataset; it is determined by the selected subset. Topic and question type are given only for QAs in one of the 49 languages with ≥100 websites.

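Since no official splits are provided, they have to be derived by the user. One minimal, deterministic way to do this (the `make_splits` helper and the 80/10/10 ratios are our own illustrative choices, not part of the dataset):

```python
import random

def make_splits(n_rows, seed=42, val_frac=0.1, test_frac=0.1):
    """Deterministically assign row indices to train/validation/test.

    Illustrative helper: shuffles all row indices with a fixed seed and
    slices off the test and validation portions first.
    """
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    n_test = int(n_rows * test_frac)
    n_val = int(n_rows * val_frac)
    return {
        "test": idx[:n_test],
        "validation": idx[n_test:n_test + n_val],
        "train": idx[n_test + n_val:],
    }

splits = make_splits(1000)
print({k: len(v) for k, v in splits.items()})
# {'test': 100, 'validation': 100, 'train': 800}
```

The resulting index lists can then be passed to `Dataset.select(...)` from 🤗 Datasets to materialize the three subsets.
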
## Examples

A sample row in JSON-like form (here from the `deu-jpn` dataset) might look like:

```json
{
    "origin": "http://hanting.airporthotelshanghai.com",
    "labse_similarity": 0.946659,
    "question1": "Wieviel kostet der Aufenthalt in der Hanting Express Shanghai Pudong Airport?",
    "question2": "Hanting Express Shanghai Pudong Airportの宿泊料金はいくらですか?",
    "answer1": "Die Preise beginnen bei CNY264, Dies hängt vom Zimmertyp und dem Datum ab.",
    "answer2": "宿泊料金はCNY264から、部屋のタイプと日付によって異なります。",
    "details": {
        "urls": ["http://hanting.airporthotelshanghai.com/de/", "http://hanting.airporthotelshanghai.com/ja/"],
        "topics": ["Traveling and Hospitality", "Traveling and Hospitality"],
        "question_types": ["What", "Is, are, do, does"]
    }
}
```

**Loading with 🤗 Datasets** (pseudo-example):
```python
from datasets import load_dataset

# e.g. load the 'deu-jpn' bitext subset
dataset = load_dataset("anonymous202501/webfaq-bitexts", "deu-jpn")["default"]

print(dataset[0])
# Example output:
# {
#     'origin': 'http://hanting.airporthotelshanghai.com',
#     'labse_similarity': 0.946659,
#     'question1': 'Wieviel kostet der Aufenthalt in der Hanting Express Shanghai Pudong Airport?',
#     'question2': 'Hanting Express Shanghai Pudong Airportの宿泊料金はいくらですか?',
#     'answer1': 'Die Preise beginnen bei CNY264, Dies hängt vom Zimmertyp und dem Datum ab.',
#     'answer2': '宿泊料金はCNY264から、部屋のタイプと日付によって異なります。',
#     'details': {
#         'urls': ['http://hanting.airporthotelshanghai.com/de/', 'http://hanting.airporthotelshanghai.com/ja/'],
#         'topics': ['Traveling and Hospitality', 'Traveling and Hospitality'],
#         'question_types': ['What', 'Is, are, do, does']
#     }
# }
```

## Considerations

Please note the following considerations when using the collected QAs:

- *[Q&A Dataset]* **Risk of Duplicate or Near-Duplicate Content**: The raw Q&A dataset is large and includes minor paraphrases.
- *[Retrieval Dataset]* **Sparse Relevance**: As raw FAQ data, each question typically has one “best” (on-page) answer. Additional valid answers may exist on other websites but are not labeled as relevant.
- **Language Detection Limitations**: Some QA pairs mix languages or contain brand names, which can confuse automatic language classification.
- **No Guarantee of Factual Accuracy**: Answers reflect the content of the source websites. They may include outdated, biased, or incorrect information.
- **Copyright and Privacy**: Please ensure compliance with any applicable laws and the source websites’ terms.

#### Additional remarks

1. **Translation Quality**
   - GPT-based scoring yields ~95% precision on the sampled evaluation, so some noise remains.
   - Some FAQ pairs may be only partially relevant or partially aligned.

2. **Unbalanced Pair Sizes**
   - Large pairs (e.g., `eng-deu`) contain far more data than smaller ones (e.g., `fas-lit`).

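Given the residual noise noted above, a simple mitigation is to filter rows by `labse_similarity`. A minimal sketch on toy rows (the 0.9 cutoff is an illustrative choice, not an official recommendation):

```python
# Toy rows standing in for dataset entries; real rows carry more fields.
rows = [
    {"question1": "Wie hoch ist der Preis?", "labse_similarity": 0.95},
    {"question1": "Kontaktieren Sie uns", "labse_similarity": 0.72},
]

MIN_SIM = 0.9  # illustrative cutoff; raise it for higher precision
filtered = [r for r in rows if r["labse_similarity"] >= MIN_SIM]
print(len(filtered))  # 1
```

With 🤗 Datasets, the equivalent is `dataset.filter(lambda r: r["labse_similarity"] >= 0.9)`.
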
## License

The **Collection of WebFAQ Datasets** is shared under the [**Creative Commons Attribution 4.0 (CC BY 4.0)**](https://creativecommons.org/licenses/by/4.0/) license.

> **Note**: The dataset is derived from public webpages in Common Crawl snapshots (2022–2024) and is intended for **research purposes**. Each FAQ’s text is published by the original website under its own terms. Downstream users should verify any usage constraints with the **original websites** as well as [Common Crawl’s Terms of Use](https://commoncrawl.org/terms-of-use/).

## Citation

If you use this dataset in your research, please consider citing the associated paper:

```bibtex
@misc{webfaq2025,
    title = {WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense Retrieval},
    author = {Anonymous Author(s)},
    year = {2025},
    howpublished = {...},
    note = {Under review}
}
```

## Contact

TBD

## Acknowledgement

We thank the Common Crawl and Web Data Commons teams for providing the underlying data, and all contributors who helped shape the WebFAQ project.

### Thank you

We hope the **Collection of WebFAQ Datasets** serves as a valuable resource for your research. Please consider citing it in any publications or projects that use it. If you encounter issues or want to contribute improvements, feel free to get in touch with us on HuggingFace or GitHub.

Happy researching!