Datasets: ufal /
Modalities: Audio, Text
Formats: webdataset
Languages: Czech
Libraries: Datasets, WebDataset
License: cc-by-2.0

stanvla committed · Commit de3015c · verified · 1 Parent(s): 6d18a20

Update README.md

Files changed (1): README.md (+68 -3)
README.md CHANGED

---
license: cc-by-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- cs
size_categories:
- 1M<n<10M
---

# Dataset Summary

**ParCzech4Speech (Unsegmented Variant)** is a large-scale Czech speech dataset derived from parliamentary recordings and official transcripts.
This variant captures **continuous speech segments** without enforcing sentence boundaries, making it well-suited for real-world streaming ASR scenarios
and speech modeling tasks that benefit from natural discourse flow.

The dataset is created using a combination of WhisperX and Wav2Vec 2.0 models for robust automatic alignment and filtering.
Segments are formed by aggregating consecutive well-aligned words until encountering a speaker change or misalignment.
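
The exact aggregation rules are not spelled out in this card, so the following is only a minimal sketch of the idea under stated assumptions: the hypothetical `Word` records, the `aggregate_segments` helper, and the `MAX_WORD_EDIT_DIST` threshold are illustrative, not part of the dataset tooling.

```python
from dataclasses import dataclass

# Illustrative threshold (assumption): a word whose alignment edit distance
# exceeds it is treated as misaligned and ends the current segment.
MAX_WORD_EDIT_DIST = 0.5

@dataclass
class Word:
    text: str
    speaker: str
    start: float       # start time in seconds
    end: float         # end time in seconds
    edit_dist: float   # normalized edit distance of the aligned word pair

def aggregate_segments(aligned_words: list[Word]) -> list[list[Word]]:
    """Group consecutive well-aligned words into continuous segments,
    starting a new segment on a speaker change or a misaligned word."""
    segments: list[list[Word]] = []
    current: list[Word] = []
    for word in aligned_words:
        misaligned = word.edit_dist > MAX_WORD_EDIT_DIST
        speaker_change = bool(current) and word.speaker != current[-1].speaker
        if misaligned or speaker_change:
            if current:
                segments.append(current)
            current = []
        if not misaligned:
            current.append(word)
    if current:
        segments.append(current)
    return segments
```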

Like other ParCzech4Speech variants, this dataset includes rich metadata and is released under the permissive **CC-BY** license, allowing both commercial and academic use.

👉 A **sentence-segmented variant** is also available (**ParCzech4Speech Sentence-Segmented**), optimized for tasks requiring clean sentence boundaries and stricter control over segment quality.
Users are encouraged to choose the variant that best fits their use case.

## 🚨 Disclaimer

⚠️ **Note:** The current release of this dataset is **partial** (~80%) and **does not yet include the full set of segments**.
The **complete version** containing all aligned segments will be made available **soon**.
All summary statistics shown below (e.g. total segment count, duration) are **computed on the complete dataset**, **not** the currently available subset.

## Data Splits

| Split | Segments  | Hours | Speakers |
|-------|-----------|-------|----------|
| Train | 1,311,027 | 2,631 | 527      |
| Dev   | 20,352    | 43.43 | 30       |
| Test  | 9,127     | 21.37 | 30       |
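
The splits can presumably be loaded with the 🤗 Datasets library. The repository ID below is a placeholder (the dataset lives under the `ufal` namespace, but the exact name should be taken from this page), the split names are assumed from the table above, and streaming mode is shown only because the WebDataset shards are large; this is a sketch, not a documented loading recipe.

```python
from datasets import load_dataset

# Placeholder repository ID: replace with the actual ufal/... dataset name.
REPO_ID = "ufal/ParCzech4Speech-unsegmented"

# Streaming avoids downloading all WebDataset shards up front.
train = load_dataset(REPO_ID, split="train", streaming=True)

# Inspect one example; the field names are assumed to match the columns
# described in the Dataset Structure section below.
first = next(iter(train))
print(sorted(first.keys()))
```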

## Dataset Structure

Each row represents a continuous speech segment with rich metadata:

| Column | Description |
|--------|-------------|
| `true_text` | Official transcript (normalized, lowercased, punctuation removed). |
| `rec_text` | Whisper-based ASR output. |
| `speaker` | Speaker ID in the format `Name.DateOfBirth`. |
| `dur` | Duration of the segment in seconds. |
| `vert` | Name of the vertical file from ParCzech 4.0. |
| `n_numbers` | Number of numeric tokens in `true_text`. |
| `n_true_words` | Number of words in the segment. |
| `seg_edit_dist` | Normalized Levenshtein distance between `true_text` and `rec_text`. |
| `align_edit_dist_max` | Maximum edit distance among aligned word pairs. |
| `true_char_avg_dur` | **Average per-character duration**, computed at the word level (range 0.035–1.0 s/char). |
| `start_token_id` | Starting token ID from the vertical format. |
| `end_token_id` | Ending token ID. |
| `wav2vec_rec` | Wav2Vec 2.0 decoded transcript used for verification. |
| `wav2vec_rec_edit_dist` | Normalized edit distance between `wav2vec_rec` and `rec_text`. |
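
As an illustration of how these metadata columns can support quality filtering, here is a sketch that keeps only segments where the official transcript, the Whisper output, and the Wav2Vec 2.0 cross-check agree closely and the duration is plausible. The thresholds are arbitrary examples, not values recommended by the authors, and the column names are assumed to be exposed exactly as listed above.

```python
from datasets import load_dataset

REPO_ID = "ufal/ParCzech4Speech-unsegmented"  # placeholder, see the note above

train = load_dataset(REPO_ID, split="train", streaming=True)

def is_clean(example) -> bool:
    """Keep segments with low transcript disagreement and a sane duration."""
    return (
        example["seg_edit_dist"] <= 0.1
        and example["wav2vec_rec_edit_dist"] <= 0.1
        and 1.0 <= example["dur"] <= 40.0
    )

clean_train = train.filter(is_clean)
```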
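
This card does not state how the `*_edit_dist` columns are normalized. A common convention, shown here purely as an assumption, is the Levenshtein distance divided by the length of the longer string, which keeps the value in [0, 1].

```python
def normalized_edit_dist(a: str, b: str) -> float:
    """Levenshtein distance divided by max(len(a), len(b)).
    This normalization is an assumption, not the dataset's documented formula."""
    if not a and not b:
        return 0.0
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1] / max(len(a), len(b))

# Two accented characters differ -> 2 edits over 19 characters ≈ 0.105.
print(normalized_edit_dist("poslanecká sněmovna", "poslanecka snemovna"))
```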

## Citation

```
TODO
```