# Speech-MASSIVE

## Dataset Description

Speech-MASSIVE is a multilingual Spoken Language Understanding (SLU) dataset comprising the speech counterpart for a portion of the [MASSIVE](https://aclanthology.org/2023.acl-long.235) textual corpus. Speech-MASSIVE covers 12 languages (Arabic, German, Spanish, French, Hungarian, Korean, Dutch, Polish, European Portuguese, Russian, Turkish, and Vietnamese) from different families and inherits from MASSIVE the annotations for the intent prediction and slot-filling tasks. The MASSIVE utterance labels span 18 domains, with 60 intents and 55 slots. A full train split is provided for French and German, and for all 12 languages (including French and German) we provide few-shot train, validation, and test splits. The few-shot train split (115 examples) covers all 18 domains, 60 intents, and 55 slots (including empty slots).

Our extension is prompted by the scarcity of massively multilingual SLU datasets and the growing need for versatile speech datasets to assess foundation models (LLMs, speech encoders) across diverse languages and tasks. To facilitate speech technology advancements, we release Speech-MASSIVE publicly under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).

Speech-MASSIVE is accepted at INTERSPEECH 2024 (Kos, Greece).
## Dataset Summary

- `validation`: validation split, available for all the 12 languages
- `test`: test split, available for all the 12 languages
- `train_115`: few-shot train split, available for all the 12 languages (all 115 samples are cross-lingually aligned)
- `train`: full train split, available for French (fr-FR) and German (de-DE)

| lang | split | # samples | # hrs | total # spk <br/>(Male/Female/Unidentified) |
|:---:|:---:|:---:|:---:|:---:|
| ar-SA | validation | 2033 | 2.12 | 36 (22/14/0) |
| | test | 2974 | 3.23 | 37 (15/17/5) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| de-DE | validation | 2033 | 2.33 | 68 (35/32/1) |
| | test | 2974 | 3.41 | 82 (36/36/10) |
| | train | 11514 | 12.61 | 117 (50/63/4) |
| | train_115 | 115 | 0.15 | 7 (3/4/0) |
| es-ES | validation | 2033 | 2.53 | 109 (51/53/5) |
| | test | 2974 | 3.61 | 85 (37/33/15) |
| | train_115 | 115 | 0.13 | 7 (3/4/0) |
| fr-FR | validation | 2033 | 2.20 | 55 (26/26/3) |
| | test | 2974 | 2.65 | 75 (31/35/9) |
| | train | 11514 | 12.42 | 103 (50/52/1) |
| | train_115 | 115 | 0.12 | 103 (50/52/1) |
| hu-HU | validation | 2033 | 2.27 | 69 (33/33/3) |
| | test | 2974 | 3.30 | 55 (25/24/6) |
| | train_115 | 115 | 0.12 | 8 (3/4/1) |
| ko-KR | validation | 2033 | 2.12 | 21 (8/13/0) |
| | test | 2974 | 2.66 | 31 (10/18/3) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| nl-NL | validation | 2033 | 2.14 | 37 (17/19/1) |
| | test | 2974 | 3.30 | 100 (48/49/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| pl-PL | validation | 2033 | 2.24 | 105 (50/52/3) |
| | test | 2974 | 3.21 | 151 (73/71/7) |
| | train_115 | 115 | 0.10 | 7 (3/4/0) |
| pt-PT | validation | 2033 | 2.20 | 107 (51/53/3) |
| | test | 2974 | 3.25 | 102 (48/50/4) |
| | train_115 | 115 | 0.12 | 8 (4/4/0) |
| ru-RU | validation | 2033 | 2.25 | 40 (7/31/2) |
| | test | 2974 | 3.44 | 51 (25/23/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| tr-TR | validation | 2033 | 2.17 | 71 (36/34/1) |
| | test | 2974 | 3.00 | 42 (17/18/7) |
| | train_115 | 115 | 0.11 | 6 (3/3/0) |
| vi-VN | validation | 2033 | 2.10 | 28 (13/14/1) |
| | test | 2974 | 3.23 | 30 (11/14/5) |
| | train_115 | 115 | 0.11 | 7 (2/4/1) |
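If you want to sanity-check the split sizes above, they are exposed by the loaded `DatasetDict`. A minimal sketch for one language, using only standard `datasets` API:

```python
from datasets import load_dataset

# Print the number of samples in each split of the Arabic config;
# the counts should match the ar-SA rows in the table above.
speech_massive_ar = load_dataset("FBK-MT/Speech-MASSIVE", "ar-SA")
for split_name, split in speech_massive_ar.items():
    print(split_name, split.num_rows)
```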
For example, to download the French config, simply specify the corresponding language config name:

```python
from datasets import load_dataset

speech_massive_fr_train = load_dataset("FBK-MT/Speech-MASSIVE", "fr-FR", split="train")
```
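Each sample carries the intent and slot annotations inherited from MASSIVE alongside the recording. Continuing from the snippet above, a quick way to peek at the schema and one example (the audio column is assumed to be named `audio`, as is conventional for Hub audio datasets; treat the printed features as the source of truth):

```python
# Inspect the column names/types and the first sample of the French train split.
print(speech_massive_fr_train.features)
sample = speech_massive_fr_train[0]
print({k: v for k, v in sample.items() if k != "audio"})  # skip the raw waveform
```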
If you don't have enough disk space on your machine, you can stream the dataset by adding a `streaming=True` argument to the `load_dataset` call. Loading a dataset in streaming mode fetches individual samples one at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

speech_massive_de_train = load_dataset("FBK-MT/Speech-MASSIVE", "de-DE", split="train", streaming=True)
list(speech_massive_de_train.take(2))
```
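In streaming mode the returned object is an `IterableDataset`, so random access by index is not available: you iterate instead, and can approximate shuffling with a buffer. A small sketch:

```python
# Approximate shuffling for a streamed split: samples are drawn at random
# from a buffer that is refilled while iterating.
shuffled = speech_massive_de_train.shuffle(seed=42, buffer_size=500)
first_sample = next(iter(shuffled))
print(first_sample.keys())
```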
You can also load all the 12 languages at once by using the `all` config. And then access each split:

```python
from datasets import load_dataset

speech_massive = load_dataset("FBK-MT/Speech-MASSIVE", "all")
multilingual_validation = speech_massive['validation']
```
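Since the `all` config mixes the 12 languages inside each split, you may want to filter by language afterwards. A sketch, assuming the language tag lives in a `locale` column as in the original MASSIVE schema (verify against the actual column names):

```python
# Keep only the Korean samples of the multilingual validation split.
# "locale" is an assumption borrowed from the MASSIVE schema.
ko_validation = multilingual_validation.filter(lambda sample: sample["locale"] == "ko-KR")
```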
Or you can load all of the dataset's splits per language, to keep the languages more clearly separated:

```python
from datasets import load_dataset, interleave_datasets, concatenate_datasets

# creating a full train set by interleaving German and French
speech_massive_de = load_dataset("FBK-MT/Speech-MASSIVE", "de-DE")
speech_massive_fr = load_dataset("FBK-MT/Speech-MASSIVE", "fr-FR")
speech_massive_train_de_fr = interleave_datasets([speech_massive_de['train'], speech_massive_fr['train']])

# creating a train_115 few-shot set by concatenating Korean and Russian
speech_massive_ko = load_dataset("FBK-MT/Speech-MASSIVE", "ko-KR")
speech_massive_ru = load_dataset("FBK-MT/Speech-MASSIVE", "ru-RU")
speech_massive_train_115_ko_ru = concatenate_datasets([speech_massive_ko['train_115'], speech_massive_ru['train_115']])
```
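One note on `interleave_datasets`: by default it stops as soon as the smallest input runs out of samples (`stopping_strategy="first_exhausted"`). The two full train splits both contain 11,514 samples, so nothing is lost here, but for unevenly sized datasets you can keep every sample by oversampling the smaller one:

```python
# Oversample the smaller dataset so that no sample is dropped.
balanced_train = interleave_datasets(
    [speech_massive_de['train'], speech_massive_fr['train']],
    stopping_strategy="all_exhausted",
)
```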