---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 5933166725.824
    num_examples: 130634
  download_size: 5547933432
  dataset_size: 5933166725.824
tags:
- audio
- text-to-speech
- turkish
- synthetic-voice
language:
- tr
task_categories:
- text-to-speech
license: cc-by-nc-4.0
pretty_name: Turkish Neural Voice Dataset
---

# Dataset Card for "turkishneuralvoice"

## Dataset Overview

**Dataset Name**: Turkish Neural Voice

**Description**: This dataset contains Turkish audio samples generated with Microsoft's Text-to-Speech service; each audio file is paired with its corresponding transcription.

## Dataset Structure

**Configs**:
- `default`

**Data Files**:
- Split: `train`
- Path: `data/train-*`

**Dataset Info**:
- Features:
  - `audio`: Audio file
  - `transcription`: Corresponding text transcription
- Splits:
  - `train`
    - Number of examples: `130,634`
    - Number of bytes: `5,933,166,725.824`
- Download size: `5,547,933,432` bytes (≈5.5 GB)
- Dataset size: `5,933,166,725.824` bytes (≈5.9 GB)
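
For reference, the feature list above corresponds to the following `datasets.Features` schema (a minimal sketch; the card does not pin a sampling rate, so `Audio()` is left at its default):

```python
from datasets import Audio, Features, Value

# Schema matching the dataset_info block above: an audio column that is
# decoded to {"path", "array", "sampling_rate"} on access, plus a string column.
features = Features({
    "audio": Audio(),
    "transcription": Value("string"),
})
```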

## Usage

To load this dataset in your Python environment using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

# Replace the placeholder below with the dataset's repository ID on the Hugging Face Hub.
dataset = load_dataset("path/to/dataset/turkishneuralvoice")
print(dataset)  # shows the train split with its features and example count
```
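
Each row pairs a decoded audio signal with its transcription. A minimal sketch of inspecting a few examples (the repository ID is a placeholder as above; `streaming=True` avoids downloading the full ≈5.5 GB archive, and the 16 kHz resampling rate is an assumption, not part of the card):

```python
from datasets import Audio, load_dataset

# Stream rows instead of downloading the whole archive (placeholder repo ID as above).
stream = load_dataset("path/to/dataset/turkishneuralvoice", split="train", streaming=True)

# Optional: resample on the fly if your TTS pipeline expects a fixed rate (16 kHz assumed).
stream = stream.cast_column("audio", Audio(sampling_rate=16_000))

sample = next(iter(stream))
audio = sample["audio"]  # dict with "path", "array" (NumPy float array), "sampling_rate"
print(sample["transcription"])
print(audio["sampling_rate"], audio["array"].shape)
```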