---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: duration
    dtype: float64
  - name: sentence
    dtype: string
  - name: uid
    dtype: string
  - name: group_id
    dtype: string
  splits:
  - name: train
    num_bytes: 72075683073.0
    num_examples: 232162
  - name: valid
    num_bytes: 8942996205.0
    num_examples: 29025
  - name: test
    num_bytes: 9073456627.0
    num_examples: 29018
  download_size: 89128013350
  dataset_size: 90092135905.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
size_categories:
- 100K<n<1M
---
# 🏒 **ScreenTalk: Full Dataset**
## πŸ“Œ **Overview**
**ScreenTalk** is a structured transcription dataset designed to improve automatic speech recognition (ASR) and natural language processing (NLP) models. It contains transcriptions from diverse screen content, capturing natural dialogues, different speech styles, and realistic conversational patterns.
The **full version** of ScreenTalk provides the complete dataset, covering multiple genres and a vast range of dialogues. It is suitable for advanced ASR model training, conversational AI, and dialogue-based applications.
---
## πŸ“Š **Dataset Features**
βœ… **Large-scale dataset** with high-quality transcriptions.
βœ… **Diverse speech styles**, including casual, dramatic, and fast-paced dialogues.
βœ… **Pre-processed and structured** for easy integration into ML pipelines.
---
## πŸ“‚ **Dataset Structure**
The dataset is split into:
- **Train (580 h):** The majority of the dataset, used for model training.
- **Validation (75 h):** A subset for tuning hyperparameters.
- **Test (75 h):** Held-out data for performance evaluation.
### **Data Format**
The dataset is stored in **Parquet format** for efficient access and handling, with the following columns:
| Column | Description |
|-------------|------------|
| `audio` | The corresponding audio clip |
| `duration` | Audio duration (float64) |
| `sentence` | Transcribed text |
| `uid` | Example identifier |
| `group_id` | Group identifier |
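Once access has been granted, the splits can be loaded with the 🤗 `datasets` library. The sketch below records the column layout from this card's YAML header; the repository id `fj11/ScreenTalk` is an assumption based on this card's location, so substitute the actual id if it differs.

```python
# Column names and dtypes as declared in this card's YAML header.
SCHEMA = {
    "audio": "audio",       # audio clip, decoded on access by `datasets`
    "duration": "float64",  # clip length
    "sentence": "string",   # transcribed text
    "uid": "string",        # example identifier
    "group_id": "string",   # group identifier
}


def load_screentalk(split: str = "train"):
    """Load one split after access is granted.

    The repository id below is a guess based on this card's location.
    Requires `pip install datasets` and a logged-in Hugging Face token.
    """
    from datasets import load_dataset

    return load_dataset("fj11/ScreenTalk", split=split)


print(sorted(SCHEMA))
```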
---
## πŸ“ˆ Whisper Benchmark on ScreenTalk (Test Set)
We evaluated several Whisper models on the ScreenTalk **test set (44.67 hours)** to understand how different model sizes perform on realistic screen dialogues. Below are the **Word Error Rates (WER)** for each model:
| Whisper Model | WER (%) |
|-------------------|-----------|
| `whisper-tiny` | 48.90 |
| `whisper-base` | 34.80 |
| `whisper-small` | 20.20 |
| `whisper-medium` | 17.96 |
| `whisper-turbo` | 15.65 |
> πŸ” These results show a clear trend: as model size increases, WER decreases significantly.
Notably, whisper-turbo currently provides the best performance, outperforming tiny, base, small, medium, especially in handling casual, dramatic, and fast-paced dialogues typical in screen content.
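For reference, word error rate is the word-level edit distance between reference and hypothesis divided by the number of reference words. The benchmark above was presumably computed with a standard toolkit such as `jiwer` or `evaluate`; a minimal pure-Python sketch of the metric itself:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between ref[:i] and hyp[:j].
    d = list(range(len(hyp) + 1))
    for i, r_word in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h_word in enumerate(hyp, 1):
            cur = min(
                d[j] + 1,                        # deletion
                d[j - 1] + 1,                    # insertion
                prev_diag + (r_word != h_word),  # substitution or match
            )
            prev_diag, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)


# One substitution out of three reference words -> WER of 1/3.
print(round(wer("the cat sat", "the cat sit"), 3))
```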
---
## πŸ“₯ **Accessing the Full Dataset**
### **Authorization Process**
1. **Click** the payment link below and complete the purchase.
2. **Use the same email as your Hugging Face account** during payment.
3. After payment, click **'Agree and Access Repository'**.
4. **Wait** for verification. Once approved, you will be granted access.
This dataset is available to **premium users**. If you’d like full access, please complete your purchase using the link below:
[πŸ”— **Purchase Full Dataset ($1499/year)**](https://buy.stripe.com/cN2bJE6Zw2kN1JSfZ2)
For any questions, feel free to contact us.
πŸ“§ **Contact:** [fj11](mailto:[email protected])
---
## πŸš€ **Use Cases**
- **ASR Model Training**: Train state-of-the-art speech recognition models.
- **Conversational AI**: Enhance dialogue systems and virtual assistants.
- **Multilingual Speech Processing**: Improve cross-lingual ASR applications.
- **Linguistic Research**: Study spoken language variations across different contexts.
---
## πŸ“œ **License**
The dataset is provided under a **commercial license**. Unauthorized redistribution or public sharing is strictly prohibited.
---
## πŸ“’ **Stay Updated**
We continuously update and expand the dataset with new content and languages. Subscribe to our updates to stay informed!