---
license: cc-by-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- cs
size_categories:
- 1M<n<10M
---

# ParCzech4Speech (Unsegmented Variant)

## Dataset Summary

**ParCzech4Speech (Unsegmented Variant)** is a large-scale Czech speech dataset derived from parliamentary recordings and official transcripts. 
This variant captures **continuous speech segments** without enforcing sentence boundaries, making it well-suited for real-world streaming ASR scenarios 
and speech modeling tasks that benefit from natural discourse flow.

The dataset is created using a combination of WhisperX and Wav2Vec 2.0 models for robust automatic alignment and filtering. 
Segments are formed by aggregating consecutive well-aligned words until encountering a speaker change or misalignment.
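The aggregation rule described above can be sketched as follows. This is an illustrative reconstruction, not the official pipeline code: words are represented as dicts with hypothetical `word`, `speaker`, and `aligned` fields, and a new segment starts whenever a misaligned word or a speaker change is encountered.

```python
def aggregate_segments(words):
    """Group consecutive well-aligned words into continuous segments.

    words: list of dicts with keys 'word', 'speaker', 'aligned' (bool).
    Returns a list of segments, each a list of word dicts.
    """
    segments, current = [], []
    for w in words:
        if not w["aligned"]:
            # A misaligned word closes the current segment and is dropped.
            if current:
                segments.append(current)
            current = []
        elif current and w["speaker"] != current[-1]["speaker"]:
            # A speaker change closes the segment and opens a new one.
            segments.append(current)
            current = [w]
        else:
            current.append(w)
    if current:
        segments.append(current)
    return segments
```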

The dataset is derived from the [**ParCzech 4.0**](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5360) corpus (official transcripts of parliamentary sessions) and the corresponding [**AudioPSP 24.01**](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5404) audios.

This dataset includes rich metadata and is released under the permissive **CC BY 2.0** license, allowing both commercial and academic use.

## 🔔 Note

📢 A **sentence-segmented variant**, [ParCzech4Speech (Sentence-Segmented Variant)](https://huggingface.co/datasets/ufal/parczech4speech-segmented), is also available; it is optimized for tasks requiring clean sentence boundaries and stricter control over segment quality.


## Data Splits

| Split | Segments | Hours | Speakers |
|-------|----------|-------|----------|
| Train | 1,311,027 | 2,631 | 527      |
| Dev   | 20,352    | 43.43 | 30       |
| Test  | 9,127     | 21.37 | 30       |

## Dataset Structure

Each row represents a continuous speech segment with rich metadata:

| Column | Description |
|--------|-------------|
| `true_text` | Unnormalized official transcript. |
| `rec_text` | Whisper-based ASR output. |
| `speaker` | Speaker ID in the format `Name.DateOfBirth`. |
| `dur` | Duration of the segment in seconds. |
| `vert` | Name of the vertical file from ParCzech 4.0. |
| `n_numbers` | Number of numeric tokens in `true_text`. |
| `n_true_words` | Number of words in the segment. |
| `seg_edit_dist` | Levenshtein distance between `true_text` and `rec_text`. |
| `align_edit_dist_max` | Maximum edit distance among aligned word pairs. |
| `true_char_avg_dur` | Average per-character duration, computed at the word level. |
| `start_token_id` | Starting token ID from the vertical format. |
| `end_token_id` | Ending token ID. |
| `wav2vec_rec` | Wav2Vec 2.0 decoded transcript used for verification. |
| `wav2vec_rec_edit_dist` | Normalized edit distance between `wav2vec_rec` and `rec_text`. |
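The metadata columns above make quality-based filtering straightforward, for example keeping only segments where the official transcript, the Whisper output, and the Wav2Vec 2.0 output agree closely. A minimal sketch over rows represented as plain dicts (column names come from the table above; the threshold values are arbitrary examples, not recommendations from the dataset authors):

```python
def is_clean(row, max_seg_dist=5, max_wav2vec_dist=0.1, min_dur=1.0):
    """Hypothetical quality filter over one segment row (a dict keyed
    by the dataset's column names). Thresholds are illustrative only."""
    return (
        row["seg_edit_dist"] <= max_seg_dist                    # transcript vs. Whisper output
        and row["wav2vec_rec_edit_dist"] <= max_wav2vec_dist    # cross-model agreement
        and row["dur"] >= min_dur                               # drop very short clips
    )

rows = [
    {"seg_edit_dist": 2, "wav2vec_rec_edit_dist": 0.05, "dur": 4.2},
    {"seg_edit_dist": 40, "wav2vec_rec_edit_dist": 0.50, "dur": 0.3},
]
clean = [r for r in rows if is_clean(r)]
```

The same predicate can be passed to `datasets.Dataset.filter` when working with the loaded dataset.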

## Available Versions

- **`v1.0-full`** – Full dataset release with all intended data included. (Latest)

- **`v0.1-partial`** – Initial partial upload containing a subset of the training data. *Kept for backward compatibility only; prefer the full version.*

By default, `load_dataset()` downloads the latest version.
To load a specific version explicitly, pass the `revision` parameter:

```python
from datasets import load_dataset

# Load a specific version of the dataset
dataset = load_dataset("ufal/parczech4speech-unsegmented", revision="v0.1-partial")  # or "v1.0-full"
```


## Citation

Please cite the dataset as follows:

```
TODO
```