|
--- |
|
license: cc-by-2.0 |
|
task_categories: |
|
- automatic-speech-recognition |
|
- text-to-speech |
|
language: |
|
- cs |
|
size_categories: |
|
- 1M<n<10M |
|
--- |
|
|
|
# ParCzech4Speech (Unsegmented Variant) |
|
|
|
## Dataset Summary |
|
|
|
**ParCzech4Speech (Unsegmented Variant)** is a large-scale Czech speech dataset derived from parliamentary recordings and official transcripts. |
|
This variant captures **continuous speech segments** without enforcing sentence boundaries, making it well-suited for real-world streaming ASR scenarios |
|
and speech modeling tasks that benefit from natural discourse flow. |
|
|
|
The dataset is created using a combination of WhisperX and Wav2Vec 2.0 models for robust automatic alignment and filtering. |
|
Segments are formed by aggregating consecutive well-aligned words until encountering a speaker change or misalignment. |
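
As a rough illustration, the aggregation step can be sketched as the loop below (a minimal sketch, not the actual alignment pipeline; the word-level `speaker` and `aligned` fields are assumed annotations):

```python
# Minimal sketch of the segment aggregation described above (illustrative only).
# Each word is assumed to carry a speaker ID and a flag marking whether it was
# well-aligned by the automatic alignment step.
def build_segments(words):
    segments, current = [], []
    for word in words:
        speaker_changed = bool(current) and word["speaker"] != current[-1]["speaker"]
        if not word["aligned"] or speaker_changed:
            if current:
                segments.append(current)  # close the running segment
            # a misaligned word is dropped; a new speaker starts a fresh segment
            current = [word] if word["aligned"] else []
        else:
            current.append(word)
    if current:
        segments.append(current)
    return segments
```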
|
|
|
The dataset is derived from the [**ParCzech 4.0**](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5360) corpus (official transcripts of parliamentary sessions) and the corresponding [**AudioPSP 24.01**](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5404) audios. |
|
|
|
This dataset includes rich metadata and is released under the permissive **CC-BY** license, allowing both commercial and academic use.
|
|
|
## 🚨 Disclaimer
|
|
|
⚠️ **Note:** The current release of this dataset is **partial** (~80%) and **does not yet include the full set of segments**.
|
The **complete version** containing all aligned segments will be made available **soon**. |
|
All summary statistics shown below (e.g., total segment count, duration) are **computed on the complete dataset**, **not** on the currently available subset.
|
|
|
## 📌 Note
|
|
|
📢 A **sentence-segmented variant** is also available: [ParCzech4Speech (Sentence-Segmented Variant)](https://huggingface.co/datasets/ufal/parczech4speech-segmented), optimized for tasks that require clean sentence boundaries and stricter control over segment quality.
|
|
|
|
|
## Data Splits |
|
|
|
| Split | Segments | Hours | Speakers | |
|
|-------|----------|-------|----------| |
|
| Train | 1,311,027 | 2631 | 527 | |
|
| Dev | 20,352 | 43.43 | 30 | |
|
| Test | 9,127 | 21.37 | 30 | |
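
The splits can be loaded with the Hugging Face `datasets` library; a minimal sketch, assuming the repository ID mirrors the sentence-segmented variant linked above (adjust it if the actual ID differs):

```python
from datasets import load_dataset

# Repository ID assumed by analogy with the sentence-segmented variant; adjust if needed.
ds = load_dataset("ufal/parczech4speech-unsegmented", split="train")

print(ds[0]["true_text"], ds[0]["speaker"], ds[0]["dur"])
```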
|
|
|
## Dataset Structure |
|
|
|
Each row represents a continuous speech segment with rich metadata: |
|
|
|
| Column | Description | |
|
|--------|-------------| |
|
| `true_text` | Official transcript (normalized, lowercased, punctuation removed). | |
|
| `rec_text` | Whisper-based ASR output. | |
|
| `speaker` | Speaker ID in the format `Name.DateOfBirth`. | |
|
| `dur` | Duration of the segment in seconds. | |
|
| `vert` | Name of the vertical file from ParCzech 4.0. | |
|
| `n_numbers` | Number of numeric tokens in `true_text`. | |
|
| `n_true_words` | Number of words in the segment. | |
|
| `seg_edit_dist` | Normalized Levenshtein distance between `true_text` and `rec_text`. | |
|
| `align_edit_dist_max` | Maximum edit distance among aligned word pairs. | |
|
| `true_char_avg_dur` | **Average per-character duration**, computed at the word level (range 0.035–1.0 s/char). |
|
| `start_token_id` | Starting token ID from the vertical format. | |
|
| `end_token_id` | Ending token ID. | |
|
| `wav2vec_rec` | Wav2Vec 2.0 decoded transcript used for verification. | |
|
| `wav2vec_rec_edit_dist` | Normalized edit distance between `wav2vec_rec` and `rec_text`. | |
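
The metadata columns make it easy to sub-select higher-quality segments; for example (a sketch continuing from the loading example above, with arbitrary thresholds):

```python
# Keep segments where the official transcript and both ASR outputs agree closely
# and the segment is short enough for typical ASR training (thresholds are illustrative).
clean = ds.filter(
    lambda ex: ex["seg_edit_dist"] < 0.1
    and ex["wav2vec_rec_edit_dist"] < 0.1
    and ex["dur"] <= 30.0
)
print(f"Retained {len(clean)} of {len(ds)} segments")
```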
|
|
|
|
|
## Citation |
|
|
|
Please cite the dataset as follows: |
|
|
|
``` |
|
TODO |
|
``` |
|
|