---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ca
license: cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets: festcat
task_categories:
- text-to-speech
task_ids: []
pretty_name: LaFresCat
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  - name: speaker_id
    dtype: string
  - name: accent
    dtype: string
  splits:
  - name: train
    num_bytes: 586302366.93
    num_examples: 2858
  download_size: 539654839
  dataset_size: 586302366.93
---
# LaFresCat Multiaccent

We present LaFresCat, the first multi-accent, multi-speaker Catalan speech dataset.

This work is licensed under a [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/deed.en). Commercial use is only possible through licensing by the voice artists. For further information, contact <[email protected]> and <[email protected]>. 

## Dataset Details

### Dataset Description

The audios in this dataset were recorded by professional voice actors during professional studio sessions at Lafresca Creative Studio.
We processed the raw recordings with the following recipe (a minimal preprocessing sketch follows the list):

- **Trimming:** Long silences at the start and end of each clip were removed.
  - [py-webrtcvad](https://pypi.org/project/webrtcvad/) -> Python interface to the Voice Activity Detector (VAD) developed by Google for the WebRTC project.

- **Resampling:** From 48,000 Hz to 22,050 Hz, one of the most common sampling rates for training TTS models.
  - [SoX](https://github.com/chirlu/sox) -> a versatile command-line utility primarily used for converting audio files between different formats.

- **Stereo-to-mono conversion:** The original raw audios were delivered in stereo, but TTS training requires mono audio, so we used the librosa and soundfile libraries to perform the conversion and a subsequent check.
  - [Librosa](https://librosa.org/) -> a Python library for music and audio analysis, offering tools for feature extraction, audio manipulation, and visualization.
  - [Soundfile](https://github.com/bastibe/python-soundfile) -> a library for reading and writing sound files in various formats, providing a simple interface for audio I/O operations.
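Below is a minimal sketch of the stereo-to-mono and resampling steps using librosa and soundfile; the file paths are placeholders, and note that the actual pipeline trimmed silences with py-webrtcvad and resampled with SoX rather than librosa.

```python
import librosa
import soundfile as sf

# Placeholder paths, for illustration only.
raw_path = "raw/central/grau/grau_220.wav"        # 48 kHz stereo studio recording
out_path = "processed/central/grau/grau_220.wav"  # 22.05 kHz mono clip

# Downmix to mono and resample to 22,050 Hz in a single call.
audio, sr = librosa.load(raw_path, sr=22050, mono=True)

# Write the processed clip and check that it is mono at the target rate.
sf.write(out_path, audio, sr)
info = sf.info(out_path)
assert info.channels == 1 and info.samplerate == 22050
```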

In total, there are 4 different accents, with 2 speakers per accent (one female and one male).
After trimming, the dataset amounts to a total of 3.75 h, distributed across speaker IDs as follows:

* Balear
  * olga -> 23.5 min
  * quim -> 30.93 min

* Central
  * elia -> 33.14 min
  * grau -> 37.86 min

* Occidental (North-Western)
  * emma -> 28.67 min
  * pere -> 25.12 min

* Valencian
  * gina -> 22.25 min
  * lluc -> 23.58 min


## Uses

This dataset is intended mainly for training text-to-speech and automatic speech recognition models for Catalan accents.

### Languages

The dataset is in Catalan (`ca-ES`).

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The dataset consists of a single split, providing audios and transcriptions:
```
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription', 'speaker_id', 'accent'],
        num_rows: 2858
    })
})
```
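As a sketch, a `DatasetDict` like the one above can be loaded with the Hugging Face `datasets` library; the repository ID below is a placeholder and should be replaced with the dataset's actual Hub ID:

```python
from datasets import load_dataset

# "projecte-aina/lafrescat" is a placeholder repository ID.
dataset = load_dataset("projecte-aina/lafrescat")
print(dataset["train"].num_rows)  # 2858
```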
Each data point is structured as:
```
>> audio_dataset[0]

{'audio': {'path': 'lafresca_multiaccent/central/grau/grau_220.wav',
  'array': array([0., 0., 0., ..., 0., 0., 0.]),
  'sampling_rate': 22050},
 'transcription': 'Una mica més amunt, un cop passats els blocs de pisos, hi havia una altra casa de dos o tres pisos en la que plantaven floretes petites blanques.',
 'speaker_id': 'grau',
 'accent': 'central'}
```

### Data Fields

- <u>```audio (dict)```</u>: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so always query the sample index before the `"audio"` column, i.e. prefer `dataset[0]["audio"]` over `dataset["audio"][0]` (see the short sketch after this list).

  * path (str): The path to the audio file.
  * array (array): Decoded audio array.
  * sampling_rate (int): Audio sampling rate.

- <u>```transcription (str)```</u>: The sentence the speaker was prompted to read.

- <u>```speaker_id (str)```</u>: The identifier of the speaker (e.g. `grau`).

- <u>```accent (str)```</u>: The Catalan accent of the speaker (e.g. `central`).
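A short sketch of the recommended access pattern, assuming `dataset` has been loaded as shown above; the `cast_column` call is optional and only illustrates on-the-fly resampling (e.g. to 16 kHz for ASR experiments):

```python
from datasets import Audio

# Index the sample first so that only this clip is decoded.
sample = dataset["train"][0]
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]  # 22050 by default

# Optional: resample on the fly, e.g. to 16 kHz for ASR experiments.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```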


## Dataset Creation

This dataset was created by members of the Language Technologies unit from the Life Sciences department of the Barcelona Supercomputing Center,
except for the Valencian sentences, which were created with the support of Cenid, the Digital Intelligence Center of the University of Alicante.
The voices belong to professional voice actors and were recorded at Lafresca Creative Studio.

### Source Data

The data presented in this dataset is the source data.

#### Data Collection and Processing

These are the technical details of the data collection and processing:

* Microphone: Austrian Audio oc818 

* Preamp: Focusrite ISA Two 

* Audio Interface: Antelope Orion 32+ 

* DAW: ProTools 2023.6.0 

Processing:

* Noise Gate: C1 Gate

* Compression: BF-76

* De-Esser: Renaissance

* EQ: Maag EQ2

* EQ: FabFilter Pro-Q3

* Limiter: L1 Ultramaximizer

Here's the information about the speakers:

| Dialect    | Gender  | County          |
|------------|---------|-----------------|
| Central    | male    | Barcelonès      |
| Central    | female  | Barcelonès      |
| Balear     | female  | Pla de Mallorca |
| Balear     | male    | Llevant         |
| Occidental | male    | Baix Ebre       |
| Occidental | female  | Baix Ebre       |
| Valencian  | female  | Ribera Alta     |
| Valencian  | male    | La Plana Baixa  |


#### Who are the source data producers?

The Language Technologies team from the Life Sciences department at the Barcelona Supercomputing Center developed this dataset. 
It features recordings by professional voice actors made at Lafresca Creative Studio.


### Annotations

To check whether there were any errors in the transcriptions of the audios, we created a Label Studio space. There, we manually listened to a subset of the dataset and compared what we heard with the transcription. Whenever a transcription did not match the audio, we corrected it.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
The dataset consists of recordings made by professional voice actors. By using this dataset, you agree not to attempt to determine the identity of the speakers.

## Bias, Risks, and Limitations

Training a text-to-speech (TTS) model by fine-tuning on a Catalan speaker of a particular dialect presents significant limitations. The main challenge lies in capturing the full range of variability inherent in that accent. Each dialect has its own phonetic, intonational, and prosodic characteristics that can vary greatly even within a single linguistic region. Consequently, a TTS model trained on a narrow dialect sample will struggle to generalize across different accents and sub-dialects, leading to reduced accuracy and naturalness. Additionally, achieving a standard representation is exceedingly difficult because linguistic features can differ markedly not only between dialects but also among individual speakers within the same dialect group. These variations encompass subtle nuances in pronunciation, rhythm, and speech patterns that are challenging to standardize in a model trained on a limited dataset.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->


## Citation

**APA:**

## Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/). In addition, the Valencian sentences were created within the framework of the NEL-VIVES project 2022/TL22/00215334.

## Dataset Card Contact
[email protected]