Modalities: Text
Languages: Bengali
Libraries: Datasets
Commit 3dc624e · 1 parent: a18ecb6
Committed by parquet-converter

Update parquet files
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
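The deleted attribute lines are ordinary glob patterns that route matching files through Git LFS. As a hedged illustration (not part of the repo), which filenames a given pattern would catch can be sanity-checked with Python's `fnmatch`; real Git matching additionally handles path patterns like `saved_model/**/*`:

```python
from fnmatch import fnmatch

# A few glob patterns taken from the deleted .gitattributes above
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.bin.*", "*.parquet", "*.tar.*", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    # Simplified check: returns True if any LFS glob matches the bare filename
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("xnli_bn-train.parquet"))  # True
print(tracked_by_lfs("README.md"))              # False
```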
README.md DELETED
@@ -1,190 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- language_creators:
- - found
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - extended
- task_categories:
- - text-classification
- task_ids:
- - natural-language-inference
- language:
- - bn
- license:
- - cc-by-nc-sa-4.0
- ---
-
- # Dataset Card for `xnli_bn`
-
- ## Table of Contents
- - [Dataset Card for `xnli_bn`](#dataset-card-for-xnli_bn)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-     - [Languages](#languages)
-     - [Usage](#usage)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Creation](#dataset-creation)
-     - [Curation Rationale](#curation-rationale)
-     - [Source Data](#source-data)
-       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-       - [Who are the source language producers?](#who-are-the-source-language-producers)
-     - [Annotations](#annotations)
-       - [Annotation process](#annotation-process)
-       - [Who are the annotators?](#who-are-the-annotators)
-     - [Personal and Sensitive Information](#personal-and-sensitive-information)
-   - [Considerations for Using the Data](#considerations-for-using-the-data)
-     - [Social Impact of Dataset](#social-impact-of-dataset)
-     - [Discussion of Biases](#discussion-of-biases)
-     - [Other Known Limitations](#other-known-limitations)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-     - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert)
- - **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204)
- - **Point of Contact:** [Tahmid Hasan](mailto:[email protected])
-
- ### Dataset Summary
-
- This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of
- MNLI data used in XNLI and state-of-the-art English to Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
-
-
- ### Supported Tasks and Leaderboards
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ### Languages
-
- * `Bengali`
-
- ### Usage
- ```python
- from datasets import load_dataset
- dataset = load_dataset("csebuetnlp/xnli_bn")
- ```
- ## Dataset Structure
-
- ### Data Instances
-
- One example from the dataset is given below in JSON format.
- ```
- {
-     "sentence1": "আসলে, আমি এমনকি এই বিষয়ে চিন্তাও করিনি, কিন্তু আমি এত হতাশ হয়ে পড়েছিলাম যে, শেষ পর্যন্ত আমি আবার তার সঙ্গে কথা বলতে শুরু করেছিলাম",
-     "sentence2": "আমি তার সাথে আবার কথা বলিনি।",
-     "label": "contradiction"
- }
- ```
-
- ### Data Fields
-
- The data fields are as follows:
-
- - `sentence1`: a `string` feature indicating the premise.
- - `sentence2`: a `string` feature indicating the hypothesis.
- - `label`: a classification label, where possible values are `contradiction` (0), `entailment` (1), `neutral` (2).
-
- ### Data Splits
- | split        | count  |
- |--------------|--------|
- | `train`      | 381449 |
- | `validation` | 2419   |
- | `test`       | 4895   |
-
-
-
-
- ## Dataset Creation
-
- The dataset curation procedure was the same as the [XNLI](https://aclanthology.org/D18-1269/) dataset: we translated the [MultiNLI](https://aclanthology.org/N18-1101/) training data using the English to Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Due to the possibility of incursions of error during automatic translation, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and original sentences to compute their similarity. All sentences below a similarity threshold of 0.70 were discarded.
-
- ### Curation Rationale
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ### Source Data
-
- [XNLI](https://aclanthology.org/D18-1269/)
-
- #### Initial Data Collection and Normalization
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
-
- #### Who are the source language producers?
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
-
- ### Annotations
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
-
- #### Annotation process
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- #### Who are the annotators?
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ### Personal and Sensitive Information
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ### Discussion of Biases
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ### Other Known Limitations
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More information needed](https://github.com/csebuetnlp/banglabert)
-
- ### Licensing Information
-
- Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
- ### Citation Information
-
- If you use the dataset, please cite the following paper:
- ```
- @misc{bhattacharjee2021banglabert,
-   title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
-   author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
-   year={2021},
-   eprint={2101.00204},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL}
- }
- ```
-
-
- ### Contributions
-
- Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
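The LaBSE filtering step described in the deleted README (discard any translation pair whose embedding cosine similarity falls below 0.70) can be sketched as follows. The vectors here are made-up toy embeddings standing in for real LaBSE outputs, and `THRESHOLD` is the value reported in the card:

```python
from math import sqrt

def cosine_similarity(u, v):
    # Standard cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

THRESHOLD = 0.70  # similarity cutoff stated in the dataset card

# Toy (source_embedding, translation_embedding) pairs standing in for LaBSE vectors
pairs = [
    ([0.9, 0.1, 0.4], [0.8, 0.2, 0.5]),  # near-parallel vectors -> kept
    ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]),  # orthogonal vectors -> discarded
]

kept = [p for p in pairs if cosine_similarity(*p) >= THRESHOLD]
print(len(kept))  # 1
```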
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"xnli_bn": {"description": "This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of\nMNLI data used in XNLI and state-of-the-art English to Bengali translation model.\n", "citation": "@misc{bhattacharjee2021banglabert,\n title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},\n author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},\n year={2021},\n eprint={2101.00204},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://github.com/csebuetnlp/banglabert", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["contradiction", "entailment", "neutral"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "xnli_bn", "config_name": "xnli_bn", "version": {"version_str": "0.0.1", "description": null, "major": 0, "minor": 0, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 175660643, "num_examples": 381449, "dataset_name": "xnli_bn"}, "test": {"name": "test", "num_bytes": 2127035, "num_examples": 4895, "dataset_name": "xnli_bn"}, "validation": {"name": "validation", "num_bytes": 1046988, "num_examples": 2419, "dataset_name": "xnli_bn"}}, "download_checksums": {"https://huggingface.co/datasets/csebuetnlp/xnli_bn/resolve/main/data/xnli_bn.tar.bz2": {"num_bytes": 21437836, "checksum": "a91b4d3f8433a98fd6251396976b17b2385ef49ffbb207fabe8124fc6b066207"}}, "download_size": 21437836, "post_processing_size": null, "dataset_size": 178834666, "size_in_bytes": 200272502}}
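The split metadata in the deleted `dataset_infos.json` is internally consistent; as a quick hedged sketch, the per-split example counts and byte sizes sum exactly to the reported totals (388,763 examples, `dataset_size` of 178,834,666 bytes):

```python
import json

# Split metadata copied from the deleted dataset_infos.json above
splits_json = """
{
  "train": {"num_bytes": 175660643, "num_examples": 381449},
  "test": {"num_bytes": 2127035, "num_examples": 4895},
  "validation": {"num_bytes": 1046988, "num_examples": 2419}
}
"""
splits = json.loads(splits_json)

total_examples = sum(s["num_examples"] for s in splits.values())
total_bytes = sum(s["num_bytes"] for s in splits.values())
print(total_examples)  # 388763
print(total_bytes)     # 178834666, matching the reported dataset_size
```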
 
 
xnli_bn.py DELETED
@@ -1,85 +0,0 @@
- """XNLI Bengali dataset"""
- import json
- import os
-
- import datasets
-
-
- _CITATION = """\
- @misc{bhattacharjee2021banglabert,
- title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
- author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
- year={2021},
- eprint={2101.00204},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
- }
- """
- _DESCRIPTION = """\
- This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of
- MNLI data used in XNLI and state-of-the-art English to Bengali translation model.
- """
- _HOMEPAGE = "https://github.com/csebuetnlp/banglabert"
- _LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)"
- _URL = "https://huggingface.co/datasets/csebuetnlp/xnli_bn/resolve/main/data/xnli_bn.tar.bz2"
- _VERSION = datasets.Version("0.0.1")
-
-
- class XnliBn(datasets.GeneratorBasedBuilder):
-     """XNLI Bengali dataset"""
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="xnli_bn",
-             version=_VERSION,
-             description=_DESCRIPTION,
-         )
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "sentence1": datasets.Value("string"),
-                 "sentence2": datasets.Value("string"),
-                 "label": datasets.features.ClassLabel(names=["contradiction", "entailment", "neutral"]),
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-             version=_VERSION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         data_dir = os.path.join(dl_manager.download_and_extract(_URL), "xnli_bn")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "train.jsonl"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "test.jsonl"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "validation.jsonl"),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples as (key, example) tuples."""
-         with open(filepath, encoding="utf-8") as f:
-             for idx_, row in enumerate(f):
-                 data = json.loads(row)
-                 yield idx_, {"sentence1": data["sentence1"], "sentence2": data["sentence2"], "label": data["label"]}
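The deleted loader's `_generate_examples` simply streams one JSON object per line and yields `(key, example)` tuples. Its behavior can be reproduced standalone, as a hedged sketch with no `datasets` dependency and a tiny in-memory stand-in for `train.jsonl`:

```python
import io
import json

def generate_examples(f):
    # Mirrors the deleted script's _generate_examples: one JSON object per line,
    # yielded as (index, example) tuples
    for idx, row in enumerate(f):
        data = json.loads(row)
        yield idx, {"sentence1": data["sentence1"],
                    "sentence2": data["sentence2"],
                    "label": data["label"]}

# Two fabricated rows in the same shape as the real JSONL splits
sample = io.StringIO(
    '{"sentence1": "p1", "sentence2": "h1", "label": "entailment"}\n'
    '{"sentence1": "p2", "sentence2": "h2", "label": "neutral"}\n'
)
examples = list(generate_examples(sample))
print(examples[0])  # (0, {'sentence1': 'p1', 'sentence2': 'h1', 'label': 'entailment'})
```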
dummy/xnli_bn/0.0.1/dummy_data.zip → xnli_bn/xnli_bn-test.parquet RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:7f3afc959023cc7cd6e06556ca78567ca303b11459abd116ffeb6e498c5251df
3
- size 2658
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aaeb46426c194dfea3798e60dbd5501a844b583c57d753cc010fae745915dd27
3
+ size 480486
data/xnli_bn.tar.bz2 → xnli_bn/xnli_bn-train.parquet RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:a91b4d3f8433a98fd6251396976b17b2385ef49ffbb207fabe8124fc6b066207
3
- size 21437836
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:18ed2cd4a7cade2cacf7ed27e57c9d260e9d670ee2a25877fc66b44239815b4c
3
+ size 74629384
xnli_bn/xnli_bn-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45f6f2e4642d8ab272ac74cf6071bb55f677288b5efeac34063551b3ef1e9624
+ size 242200
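The renamed and added files above are Git LFS pointer files: three `key value` lines (`version`, `oid`, `size`) stored in place of the actual parquet blobs. A hedged sketch of parsing one such pointer, using the validation-split pointer copied from the diff above:

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; the oid value carries its hash
    # algorithm as a prefix, e.g. "sha256:<hex digest>"
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

# Pointer content for xnli_bn/xnli_bn-validation.parquet, from the diff above
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:45f6f2e4642d8ab272ac74cf6071bb55f677288b5efeac34063551b3ef1e9624
size 242200
"""
info = parse_lfs_pointer(pointer)
print(info["algo"], info["size"])  # sha256 242200
```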