# Dataset Card for "huggingartists/tiamat"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.115111 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9ca13ed308504f6f9ac7c3cabdb54138.556x556x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tiamat">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tiamat</div>
<a href="https://genius.com/artists/tiamat">
<div style="text-align: center; font-size: 14px;">@tiamat</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/tiamat).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tiamat")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|122| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/tiamat")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
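Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/tiamat")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```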
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/tiamat
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:11+00:00
# Dataset Card for "huggingartists/till-lindemann"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.275488 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/48d6ca7ca17a9dfc9ad3034e71533a89.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/till-lindemann">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Till Lindemann</div>
<a href="https://genius.com/artists/till-lindemann">
<div style="text-align: center; font-size: 14px;">@till-lindemann</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/till-lindemann).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/till-lindemann")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|257| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/till-lindemann")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
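Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/till-lindemann")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```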
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/till-lindemann
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:17+00:00
# Dataset Card for "huggingartists/tom-waits"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.818237 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/505d2d5d1d43304dca446fd2e788a0f8.750x750x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tom-waits">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Waits</div>
<a href="https://genius.com/artists/tom-waits">
<div style="text-align: center; font-size: 14px;">@tom-waits</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/tom-waits).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tom-waits")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|681| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/tom-waits")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
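Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/tom-waits")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```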
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/tom-waits
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:23+00:00
# Dataset Card for "huggingartists/tony-raut-and-garry-topor"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.083901 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/7249d6785a5c87095850bd4048595e08.989x989x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tony-raut-and-garry-topor">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Тони Раут (Tony Raut) & Гарри Топор (Garry Topor)</div>
<a href="https://genius.com/artists/tony-raut-and-garry-topor">
<div style="text-align: center; font-size: 14px;">@tony-raut-and-garry-topor</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/tony-raut-and-garry-topor).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tony-raut-and-garry-topor")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|15| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/tony-raut-and-garry-topor")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
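Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/tony-raut-and-garry-topor")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```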
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/tony-raut-and-garry-topor
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:30+00:00
# Dataset Card for "huggingartists/tool"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.129846 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/acf1d51a2d729391074dc51a6dd26857.1000x1000x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tool">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tool</div>
<a href="https://genius.com/artists/tool">
<div style="text-align: center; font-size: 14px;">@tool</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/tool).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tool")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|101| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/tool")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
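Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/tool")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```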
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/tool
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:37+00:00
# Dataset Card for "huggingartists/totpoc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.245029 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/ea3dc2eb7b35254ae3764df28bc02500.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/totpoc">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">totpoc</div>
<a href="https://genius.com/artists/totpoc">
<div style="text-align: center; font-size: 14px;">@totpoc</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/totpoc).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/totpoc")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|78| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/totpoc")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
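Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/totpoc")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```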
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/totpoc
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:43+00:00
# Dataset Card for "huggingartists/travis-scott"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.483549 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5d19fecdb3828ca9ec89dda588e2eb7d.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/travis-scott">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Travis Scott</div>
<a href="https://genius.com/artists/travis-scott">
<div style="text-align: center; font-size: 14px;">@travis-scott</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/travis-scott).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/travis-scott")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|761| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/travis-scott")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
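Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/travis-scott")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```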
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/travis-scott
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:52+00:00
# Dataset Card for "huggingartists/twenty-one-pilots"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.348302 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5ab9e38cf86aa170734fea1731610abc.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/twenty-one-pilots">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">twenty one pilots</div>
<a href="https://genius.com/artists/twenty-one-pilots">
<div style="text-align: center; font-size: 14px;">@twenty-one-pilots</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/twenty-one-pilots).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/twenty-one-pilots")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|197| -| -|
The `train` split can easily be divided into `train`, `validation`, and `test` splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/twenty-one-pilots")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of lyrics at the 90% and 97% marks: 90% train, 7% validation, 3% test.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
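Alternatively, a similar three-way split can be produced with the `datasets` library's built-in `train_test_split` method instead of `numpy`. The sketch below shuffles the songs by default (pass `shuffle=False` to keep the original order), and the `seed` value is arbitrary:
```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/twenty-one-pilots")["train"]

# Hold out 10% of the songs, then split the holdout 70/30 into validation and test,
# giving roughly 90% / 7% / 3% overall.
split = ds.train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.30, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```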
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository:
[https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Dataset ID:** huggingartists/twenty-one-pilots
- **Tags:** language:en, huggingartists, lyrics, region:us
- **Created:** 2022-03-02T23:29:22+00:00
- **Last modified:** 2022-10-25T08:48:59+00:00
# Dataset Card for "huggingartists/tyler-the-creator"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.072102 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/80c9c64ebed6a29681aaeaebe57edf91.984x984x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tyler-the-creator">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tyler, The Creator</div>
<a href="https://genius.com/artists/tyler-the-creator">
<div style="text-align: center; font-size: 14px;">@tyler-the-creator</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
The corresponding trained model is available [here](https://huggingface.co/huggingartists/tyler-the-creator).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tyler-the-creator")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|529| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tyler-the-creator")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
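Continuing from the snippet above, it can be worth sanity-checking the resulting split sizes and, optionally, persisting them; this is only an illustrative sketch and the output path is a placeholder:
```python
# Inspect the number of examples per split
print({split: len(ds) for split, ds in datasets.items()})

# Optionally persist the splits for later reuse (the path is an arbitrary example)
datasets.save_to_disk("./tyler-the-creator-lyrics-splits")
```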
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/tyler-the-creator | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:07+00:00 |
50c4e213e693e863926bd6a64ca4a586cdba72e1 |
# Dataset Card for "huggingartists/upsahl"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.168635 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e0fa9b5bdd037ab75031dd7372d05cd6.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/upsahl">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">UPSAHL</div>
<a href="https://genius.com/artists/upsahl">
<div style="text-align: center; font-size: 14px;">@upsahl</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/upsahl).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/upsahl")
```
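After loading, a quick way to get a feel for the data is to check the number of rows and peek at a single record; a minimal sketch:
```python
# Number of lyric documents in the train split
print(dataset["train"].num_rows)

# Preview the start of the first record (one lyrics string in the "text" field)
print(dataset["train"][0]["text"][:200])
```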
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|107| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/upsahl")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/upsahl | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:14+00:00 |
e812b071037355403ccd1af6cc226d8ffa1a24ac |
# Dataset Card for "huggingartists/v-x-v-prince"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.198634 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/08ad78acc3e91c45a426390e7524d4e9.853x853x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/v-x-v-prince">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">V $ X V PRiNCE</div>
<a href="https://genius.com/artists/v-x-v-prince">
<div style="text-align: center; font-size: 14px;">@v-x-v-prince</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/v-x-v-prince).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/v-x-v-prince")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|77| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/v-x-v-prince")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/v-x-v-prince | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:21+00:00 |
95597ca32e7acb3d8193c94ea6b89af1b3461804 |
# Dataset Card for "huggingartists/van-morrison"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.062718 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2f97270cc1d1420867052a6c331d5820.1000x667x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/van-morrison">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Van Morrison</div>
<a href="https://genius.com/artists/van-morrison">
<div style="text-align: center; font-size: 14px;">@van-morrison</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/van-morrison).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/van-morrison")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|929| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/van-morrison")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
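If the goal is to fine-tune a causal language model on these lyrics, tokenization is a typical next step. The sketch below continues from the snippet above and uses the GPT-2 tokenizer purely as an illustrative choice; any other checkpoint would work the same way:
```python
from transformers import AutoTokenizer

# GPT-2 is only an example choice of tokenizer here
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    # Truncate long lyrics to a fixed context length (512 is an arbitrary example)
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = datasets.map(tokenize, batched=True, remove_columns=["text"])
```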
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/van-morrison | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:27+00:00 |
2d1205d558be00c0ab98f722274d244257ba043c |
# Dataset Card for "huggingartists/veggietales"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.220878 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d14c9e27b39f0e250784a2dce037a03d.720x720x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/veggietales">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">VeggieTales</div>
<a href="https://genius.com/artists/veggietales">
<div style="text-align: center; font-size: 14px;">@veggietales</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/veggietales).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/veggietales")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|163| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/veggietales")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/veggietales | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:47+00:00 |
5fcfb0cc0a17e04c2ae691b2323671738faba472 |
# Dataset Card for "huggingartists/viktor-tsoi"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.189002 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f9d03b2a6c45897724e74fab6a1aa86c.500x500x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/viktor-tsoi">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Виктор Цой (Viktor Tsoi)</div>
<a href="https://genius.com/artists/viktor-tsoi">
<div style="text-align: center; font-size: 14px;">@viktor-tsoi</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/viktor-tsoi).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/viktor-tsoi")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|118| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/viktor-tsoi")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/viktor-tsoi | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:49:55+00:00 |
0621a2c597f9cca2856ff3f8d8447bc699b0a0a1 |
# Dataset Card for "huggingartists/vladimir-vysotsky"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.124261 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/18735fe10bace7b3f615b2da9c95ac73.938x938x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/vladimir-vysotsky">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Владимир Высоцкий (Vladimir Vysotsky)</div>
<a href="https://genius.com/artists/vladimir-vysotsky">
<div style="text-align: center; font-size: 14px;">@vladimir-vysotsky</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/vladimir-vysotsky).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/vladimir-vysotsky")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|47| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/vladimir-vysotsky")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/vladimir-vysotsky | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:03+00:00 |
e838ad2ec7f92da7b91da83b7d11034ca392ec70 |
# Dataset Card for "huggingartists/xxxtentacion"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.957186 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f72572986d8187cf35f0fc9f9d06afb2.900x900x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/xxxtentacion">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">XXXTENTACION</div>
<a href="https://genius.com/artists/xxxtentacion">
<div style="text-align: center; font-size: 14px;">@xxxtentacion</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/xxxtentacion).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/xxxtentacion")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|784| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/xxxtentacion")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
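Depending on the downstream task, it can also help to drop very short lyrics before training. A hedged example that continues from the snippet above (the 100-character threshold is arbitrary):
```python
# Keep only lyrics longer than an arbitrary minimum length
filtered = datasets.filter(lambda example: len(example["text"]) > 100)

# Compare split sizes after filtering
print({split: len(ds) for split, ds in filtered.items()})
```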
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/xxxtentacion | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:12+00:00 |
7329e2519ead1c6913950650d664f0eb4625d120 |
# Dataset Card for "huggingartists/young-thug"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 4.254273 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b08755976e2dcad78a75ee47059adcbc.777x777x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/young-thug">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Young Thug</div>
<a href="https://genius.com/artists/young-thug">
<div style="text-align: center; font-size: 14px;">@young-thug</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/young-thug).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/young-thug")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|1656| -| -|
'Train' can easily be divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/young-thug")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
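Note that the snippet above splits the songs in their original order. If a random split is preferred, `datasets` can shuffle and split directly; the sketch below is an alternative to the `np.split` step, reproducing the same 90/7/3 proportions with an arbitrary seed:
```python
from datasets import load_dataset, DatasetDict

raw = load_dataset("huggingartists/young-thug")

# Random 90/10 split, then split the 10% holdout into validation (7%) and test (3%)
shuffled = raw["train"].train_test_split(test_size=0.1, seed=42)
holdout = shuffled["test"].train_test_split(test_size=0.3, seed=42)

datasets = DatasetDict(
    {
        "train": shuffled["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)
```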
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit [the project repository](https://github.com/AlekseyKorshuk/huggingartists).
| huggingartists/young-thug | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:19+00:00 |
d6f4228f2d105c97fb885fe986b22262621f70ce |
# Dataset Card for "huggingartists/yung-lean"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.441891 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/8c898f8c39dbd271b3ccfd5303d423c7.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/yung-lean">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Yung Lean</div>
<a href="https://genius.com/artists/yung-lean">
<div style="text-align: center; font-size: 14px;">@yung-lean</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/yung-lean).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/yung-lean")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|269| -| -|
'Train' can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/yung-lean")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
split_1 = int(len(datasets['train']['text']) * train_percentage)
split_2 = int(len(datasets['train']['text']) * (train_percentage + validation_percentage))
train, validation, test = np.split(datasets['train']['text'], [split_1, split_2])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/yung-lean | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:26+00:00 |
82eb5c29ea8102255af47cee348ce27c24646743 |
# Dataset Card for "huggingartists/yung-plague"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.109415 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/6c0f8e02f467c694379f242ea2897efd.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/yung-plague">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Yung Plague</div>
<a href="https://genius.com/artists/yung-plague">
<div style="text-align: center; font-size: 14px;">@yung-plague</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/yung-plague).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/yung-plague")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|38| -| -|
'Train' can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/yung-plague")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
split_1 = int(len(datasets['train']['text']) * train_percentage)
split_2 = int(len(datasets['train']['text']) * (train_percentage + validation_percentage))
train, validation, test = np.split(datasets['train']['text'], [split_1, split_2])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/yung-plague | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:33+00:00 |
445f3b4fbf2c6105fb6fb2880cfa44d8ec62c6c5 |
# Dataset Card for "huggingartists/zemfira"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.226796 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/df440220b2dd0a34a119db791da90e59.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/zemfira">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Земфира (Zemfira)</div>
<a href="https://genius.com/artists/zemfira">
<div style="text-align: center; font-size: 14px;">@zemfira</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/zemfira).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/zemfira")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|165| -| -|
'Train' can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/zemfira")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
split_1 = int(len(datasets['train']['text']) * train_percentage)
split_2 = int(len(datasets['train']['text']) * (train_percentage + validation_percentage))
train, validation, test = np.split(datasets['train']['text'], [split_1, split_2])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/zemfira | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T08:50:39+00:00 |
344cc7bca72dc01a0bf65b62c44e32f62bcb379d | # Data Measurements Tools: pre-computed data
Dataset of pre-computed data measures for the following datasets (a download sketch follows the list):
- 'amazon_polarity'
- 'c4'
- 'glue'
- 'hate_speech18'
- 'hate_speech_offensive'
- 'imdb'
- 'squad'
- 'squad_v2'
- 'super_glue'
- 'wikitext'
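A minimal download sketch using the `huggingface_hub` library (this simply pulls the whole file repository locally; the exact folder layout inside it is not documented here):
```python
from huggingface_hub import snapshot_download

# Download all pre-computed measurement files for local use
local_dir = snapshot_download(
    repo_id="huggingface/DataMeasurementsFiles",
    repo_type="dataset",
)
print(local_dir)
```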
| huggingface/DataMeasurementsFiles | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-27T18:28:29+00:00 |
0b59c76392fa4f3b2aea8ccc57a30d49dbd86fc6 |
### This dataset contains images used in the documentation of HuggingFace's libraries.
HF Team: Please make sure you optimize the assets before uploading them.
My favorite tool for this is https://tinypng.com/.
| huggingface/documentation-images | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2024-02-16T11:49:37+00:00 |
9462154cba99c3c7f569d3b4f1ba26614afd558c | This repository contains the mapping from integer IDs to actual label names (in HuggingFace Transformers typically called `id2label`) for several datasets.
Current datasets include:
- ImageNet-1k
- ImageNet-22k (also called ImageNet-21k as there are 21,843 classes)
- COCO detection 2017
- COCO panoptic 2017
- ADE20k (actually, the [MIT Scene Parsing benchmark](http://sceneparsing.csail.mit.edu/), which is a subset of ADE20k)
- Cityscapes
- VQAv2
- Kinetics-700
- RVL-CDIP
- PASCAL VOC
- Kinetics-400
- ...
You can read in a label file as follows (using the `huggingface_hub` library):
```python
from huggingface_hub import hf_hub_download
import json
repo_id = "huggingface/label-files"
filename = "imagenet-22k-id2label.json"
id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r"))
id2label = {int(k):v for k,v in id2label.items()}
```
To add an `id2label` mapping for a new dataset, simply define a Python dictionary, and then save that dictionary as a JSON file, like so:
```python
import json
# simple example
id2label = {0: 'cat', 1: 'dog'}
with open('cats-and-dogs-id2label.json', 'w') as fp:
json.dump(id2label, fp)
```
You can then upload it to this repository (assuming you have write access).
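For example, a minimal upload sketch using the `huggingface_hub` library (the filename matches the example above, and write access to the repository is assumed):
```python
from huggingface_hub import upload_file

# Upload the JSON file created above to the dataset repository
upload_file(
    path_or_fileobj="cats-and-dogs-id2label.json",
    path_in_repo="cats-and-dogs-id2label.json",
    repo_id="huggingface/label-files",
    repo_type="dataset",
)
```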
| huggingface/label-files | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2023-03-15T06:51:19+00:00 |
12811d9e23b1a4315e119ff014082eab2363ec14 | huggingface-course/documentation-images | [
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "apache-2.0"} | 2023-01-05T02:03:40+00:00 |
|
cba9be1dee92bc1e663bae387587859d02435cdf | d2 | huyongquan/d2 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-19T10:37:27+00:00 |
7e9a0fb84fd6c61d81fab5718bdb235f93625600 | This is the same dataset as the question_generator dataset but with the context removed and the question and answer in separate fields. This is intended to be used with the [question_generator](https://github.com/AMontgomerie/question_generator) repo to train the qa_evaluator model which predicts whether a question and answer pair makes sense. | iarfmoose/qa_evaluator | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-29T05:20:10+00:00 |
107f93838cc2fe938b5cc5d21d70f0e288040c60 | This dataset is made up of data taken from SQuAD v2.0, RACE, CoQA, and MSMARCO. Some examples have been filtered out of the original datasets and others have been modified.
There are two fields: question and text. The question field contains the question, and the text field contains both the answer and the context in the following format:
"\<answer> (answer text) \<context> (context text)"
The `<answer>` and `<context>` markers are included as special tokens in the question generator's tokenizer.
This dataset is intended to be used with the [question_generator repo](https://github.com/AMontgomerie/question_generator) to train the question generator model.
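To make the format concrete, here is a small illustrative sketch (the question, answer, and context values are hypothetical):
```python
# Build one record in the format described above
question = "What is the capital of France?"
answer = "Paris"
context = "Paris is the capital and most populous city of France."

record = {
    "question": question,
    "text": f"<answer> {answer} <context> {context}",
}
print(record["text"])  # <answer> Paris <context> Paris is the capital ...
```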
| iarfmoose/question_generator | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-29T05:22:03+00:00 |
c9ab866576b08dd92819e413fc0b3853757da304 | # The Unsplash Dataset

The Unsplash Dataset is made up of contributions from more than 250,000 photographers around the world and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.
The Unsplash Dataset is offered in two datasets:
- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches
- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches
As the Unsplash library continues to grow, we’ll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/).
We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets.
For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data).
## Download
### Lite Dataset
The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).
[⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw]
### Full Dataset
The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20 GB compressed (~43 GB raw).
## Documentation
See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md).
## Usage
You can follow these examples to load the dataset in these common formats:
- [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql)
- [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python) (a minimal loading sketch is shown after this list)
- [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example)
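A minimal Python sketch, assuming the Lite dataset's TSV files (e.g. `photos.tsv000`) have already been downloaded and extracted into a local folder:
```python
import glob
import pandas as pd

# Hypothetical local path to the extracted Lite dataset
files = glob.glob("unsplash-lite/photos.tsv*")
photos = pd.concat([pd.read_csv(f, sep="\t", header=0) for f in files], ignore_index=True)

print(photos.shape)        # number of photo records and columns
print(photos.columns[:5])  # first few column names
```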
## Share your work
We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.
We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [[email protected]](mailto:[email protected]).
If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data).
----
The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).

| image-search-2/unsplash_lite_image_dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-19T12:44:46+00:00 |
a203e97ece3eced76989a77cc5c18d48b937a8cc | # BinhVQ dedup
**Important**: Please install `lm_dataformat` by `pip install lm_dataformat` before using this dataset
## How to use
```python
import datasets
dataset = datasets.load_dataset("imthanhlv/binhvq_dedup")
```
## Dataset information
This dataset was created from the `https://github.com/binhvq/news-corpus` dump dated 21/05/2021. Some simple preprocessing was applied:
- Using BeautifulSoup to clean content
- Each record is a concatenation of (title + "\n" + sapo + "\n" + content)
- Finally, the records were shuffled, split into train & validation sets, and deduplicated (exact match using sha256) | imthanhlv/binhvq_dedup | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-01T16:42:00+00:00 |
6b85e353e04d9235d004a7fc2b3357e7f46217bd |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/89efd3a0fa3ead3f0b8e432e8796697a738d4561b24ff91f4fb2cc25d86e9fb0/train/ccef55189b7843d49110228cb0a71bfa115.wav',
'array': array([-0.01217651, -0.04351807, -0.06278992, ..., -0.00018311,
-0.00146484, -0.00349426]),
'sampling_rate': 16000},
'sentence': 'מצד אחד ובתנועה הציונית הצעירה'}
```
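A minimal loading sketch, assuming the dataset loads directly with the `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_coursera")
example = ds["train"][0]

print(example["sentence"])                # transcription text
print(example["audio"]["sampling_rate"])  # 16000
print(len(example["audio"]["array"]))     # number of audio samples
```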
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 20306 | 5076 |
| hours | 28.88 | 7.23 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_coursera,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Coursera},
year = {2022},
howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera}},
}
```
### Contributions
[More Information Needed] | imvladikon/hebrew_speech_coursera | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:he",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["he"], "size_categories": ["1K<n<10K"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6670706136.352, "num_examples": 20306}, {"name": "validation", "num_bytes": 1648062261.28, "num_examples": 5076}], "download_size": 7726933856, "dataset_size": 8318768397.632}} | 2023-05-05T08:05:00+00:00 |
e0e3988bc3c78be1f697b21c8feb5b49d55d9faa |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hebrew Dataset for ASR
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/8ce7402f6482c6053251d7f3000eec88668c994beb48b7ca7352e77ef810a0b6/train/e429593fede945c185897e378a5839f4198.wav',
'array': array([-0.00265503, -0.0018158 , -0.00149536, ..., -0.00135803,
-0.00231934, -0.00190735]),
'sampling_rate': 16000},
'sentence': 'היא מבינה אותי יותר מכל אחד אחר'}
```
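A minimal loading sketch, assuming the dataset loads directly with the `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_kan")
example = ds["train"][0]

print(example["sentence"])                # transcription text
print(example["audio"]["sampling_rate"])  # 16000
print(len(example["audio"]["array"]))     # number of audio samples
```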
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 8000 | 2000 |
| hours | 6.92 | 1.73 |
## Dataset Creation
### Curation Rationale
Data was scraped from YouTube (the כאן channel), with outliers removed (filtered by length and by the ratio between audio length and sentence length).
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_kan,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Kan},
year = {2022},
howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_kan}},
}
```
### Contributions
[More Information Needed] | imvladikon/hebrew_speech_kan | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:he",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["he"], "size_categories": ["1K<n<10K"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1569850175.0, "num_examples": 8000}, {"name": "validation", "num_bytes": 394275049.0, "num_examples": 2000}], "download_size": 1989406585, "dataset_size": 1964125224.0}} | 2023-05-05T08:12:15+00:00 |
3319e7f6e629f7f2dfaa381ef318b95b96399af4 | # Dataset Card
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://zenodo.org/record/2707356](https://zenodo.org/record/2707356)
- **Repository:** [https://github.com/NLPH/knesset-2004-2005](https://github.com/NLPH/knesset-2004-2005)
- **Paper:**
- **Point of Contact:**
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
An example of a sample:
```
{
"text": <text content of given document>,
"path": <file path to docx>
}
```
Dataset usage
Available configurations are "kneset16", "kneset17" and "knesset_tagged"; only a train split is provided.
```python
train_ds = load_dataset("imvladikon/knesset_meetings_corpus", "kneset16", split="train")
```
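Each record then exposes the two fields shown in the sample above, for example:
```python
sample = train_ds[0]
print(sample["path"])        # path to the source .docx file
print(sample["text"][:200])  # first characters of the document text
```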
The Knesset Meetings Corpus 2004-2005 is made up of two components:
* Raw texts - 282 files made up of 867,725 lines together. These can be downloaded in two formats:
  * As `doc` files, encoded using `windows-1255` encoding:
    * `kneset16.zip` - Contains 164 text files made up of 543,228 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/docs/kneset16.zip) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/kneset16.zip?raw=true)
    * `kneset17.zip` - Contains 118 text files made up of 324,497 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/docs/kneset17.zip) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/kneset17.zip?raw=true)
  * As `txt` files, encoded using `utf8` encoding:
    * `kneset.tar.gz` - An archive of all the raw text files, divided into two folders: [Github mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/kneset.tar.gz)
      * `16` - Contains 164 text files made up of 543,228 lines together.
      * `17` - Contains 118 text files made up of 324,497 lines together.
    * `knesset_txt_16.tar.gz` - Contains 164 text files made up of 543,228 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/utf8/knesset_txt_16.tar.gz) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/knesset_txt_16.tar.gz?raw=true)
    * `knesset_txt_17.zip` - Contains 118 text files made up of 324,497 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/utf8/knesset_txt_17.zip) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/knesset_txt_17.zip?raw=true)
* Tokenized and morphologically tagged texts - Tagged versions exist only for the files in the `16` folder. The texts are encoded using [MILA's XML schema for corpora](http://www.mila.cs.technion.ac.il/eng/resources_standards.html). These can be downloaded in two ways:
  * `knesset_tagged_16.tar.gz` - An archive of all tokenized and tagged files. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/tagged/knesset_tagged_16.tar.gz) [Archive.org mirror](https://archive.org/details/knesset_transcripts_2004_2005)
#### Mirrors
This repository is a mirror of this dataset [found on MILA's website](http://www.mila.cs.technion.ac.il/eng/resources_corpora_haknesset.html).
Zenodo mirror: [https://zenodo.org/record/2707356](https://zenodo.org/record/2707356)
#### License
All Knesset meeting protocols are in the [public domain](https://en.wikipedia.org/wiki/Public_domain) ([רשות הציבור](https://he.wikipedia.org/wiki/%D7%A8%D7%A9%D7%95%D7%AA_%D7%94%D7%A6%D7%99%D7%91%D7%95%D7%A8)) by law. These files are thus in the public domain and do not require any license or public domain dedication to set their status.
DOI: [10.5281/zenodo.2707356](https://doi.org/10.5281/zenodo.2707356)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is available under the [ Open Data Commons Public Domain Dedication & License 1.0](https://opendatacommons.org/licenses/pddl/).
### Citation Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Contributions
| imvladikon/knesset_meetings_corpus | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:he",
"license:pddl",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["he"], "license": ["pddl"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Knesset Meetings Corpus"} | 2022-10-23T10:45:02+00:00 |
f2a2a1344cd41ec9574181b324f4d800061cb05a |
# Dataset of Indonesian Online Newspaper
This is a copy of the dataset created by **Feryandi Nurdiantoro** (<https://github.com/feryandi/Dataset-Artikel>). The original dataset in JSON format is stored uncompressed in Google Drive in more than 500K files, one file per article. Unfortunately, due to its size, it is impossible to download the whole dataset as one big compressed file (it takes forever to compress it online). Therefore I provide here a copy and its cleaned version as compressed files.
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The size of the 500K uncompressed JSON files (newspapers-json.tgz) is around 2.2GB, and the cleaned version, stored in one big uncompressed text file (newspapers.txt.gz), is about 1GB. The original source in Google Drive also contains a dataset in HTML format which includes raw data (pictures, css, javascript, ...) from the online news websites. I don't copy it here since it is about 60GB and mostly we only need the text content for NLP research.
The following compressed files are provided (a loading sketch follows the list):
* newspaper-json.gz: the compressed original 500K json files.
* newspaper.txt.gz: a dump of all json files in one big cleaned text file which is normally the only one needed for language model training.
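A minimal loading sketch for the cleaned text dump; the exact filename is taken from the list above and should be treated as an assumption about the repository layout:
```python
import gzip
from huggingface_hub import hf_hub_download

# Download the cleaned single-text-file dump and print the first few lines
path = hf_hub_download(
    repo_id="indonesian-nlp/id_newspapers_2018",
    filename="newspaper.txt.gz",
    repo_type="dataset",
)
with gzip.open(path, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        print(line.strip()[:100])
        if i == 2:
            break
```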
The license has been copied from the source:
## License
Proyek ini dilisensikan dibawah lisensi **Creative Commons Attribution-ShareAlike 4.0 International License**\*. Kumpulan data yang dibagikan bertujuan untuk ilmu pengetahuan, pembelajaran, dan penelitian Bahasa Indonesia (komputasi maupun lingusitik), dan hanya dapat digunakan untuk hal tersebut. Kepemilikan data untuk setiap artikel dimiliki oleh media yang bersangkutan dimana data tersebut diambil; dan pemilik repository ini tidak melakukan klaim kepemilikan atas konten tersebut. Jika Anda mendapati bahwa data ini telah melanggar suatu hak cipta; mohon kontak pengelola repository ini.
This work is licensed under a **Creative Commons Attribution-ShareAlike 4.0 International License**. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer.
| indonesian-nlp/id_newspapers_2018 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Indonesian Newspapers 2018"} | 2022-10-25T12:47:43+00:00 |
30e6fbf9e2fd959a4620116a2868dc98b5db918d | Keywords: astrophysics, astroparticle, simulation, timeseries, point-cloud
# Dataset Card for FACT Open Crab Sample
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://factdata.app.tu-dortmund.de/
- **Repository:** [Needs More Information]
- **Paper:** https://iopscience.iop.org/article/10.1088/1748-0221/8/06/P06008/pdf, https://iopscience.iop.org/article/10.1088/1748-0221/9/10/P10012/pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a mirror of the Open Crab Sample released by the FACT collaboration, containing simulations of astroparticle events as seen by the FACT telescope, generated with the CORSIKA simulation program, as well as a few nights of observations of the Crab Nebula over 2013 and 2014. The simulation data comes in two formats: the photon stream format, and a preprocessed version containing extracted features and cleaned point clouds, where the cleaning was performed with various levels of DBSCAN. The observations are all raw data, with no cleaning or extracted features.
### Supported Tasks and Leaderboards
- 'classification': Classification of simulated events as either hadron or gamma events.
- 'regression': Predicting the initial energy of the simulated events, or where in the night sky the original particle originated
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
The goal of the Open Crab Sample is to open up astroparticle data for exploring different ways of doing analysis.
### Source Data
#### Initial Data Collection and Normalization
The initial simulated data was generated by the CORSIKA simulation program. The observations were taken by the FACT telescope on La Palma between 2013 and 2014. The data is not normalized.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The simulations were annotated from the ground truth in the simulation, while the observations have no ground truths.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | jacobbieker/open-crab-sample | [
"doi:10.57967/hf/1649",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-11T11:56:00+00:00 |
9083269e47b7faeb22e61eed9f467d9077d72d5e | test | jamol1741/test_dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-10T10:12:46+00:00 |
c381c83a73d2633b8a62a995c93fc9f82bee96d6 | jeffboudier/testing3 | [
"license:afl-3.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "afl-3.0"} | 2022-01-26T01:51:13+00:00 |
|
9a3686ebeddd8751304c63f0be2fa4d28b8b0854 | This is a translated version of SNLI in Dutch. The translation was performed using Google Translate. | jegormeister/dutch-snli | [
"language:nl",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["nl"]} | 2023-10-02T18:06:35+00:00 |
7019f71cc4cdfe11bf8f52f18375bc1b407313ca |
# Dataset Card for "LegalGLUE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://git.rwth-aachen.de/johanna.frenz/legalglue
### Dataset Summary
The "Legal General Language Understanding Evaluation" (LegalGLUE) dataset was created as part of a bachelor thesis.
It consists of four already existing datasets covering three task types and a total of 23 different languages.
### Supported Tasks
<table>
<tr><td>Dataset</td><td>Source</td><td>Task Type</td><td>Languages</td><tr>
<tr><td>German_LER</td><td> <a href="https://arxiv.org/abs/2003.13016">Leitner et al.</a></td><td>Named Entity Recognition</td><td>German</td></tr>
<tr><td>LeNER_Br</td><td> <a href="https://github.com/peluz/lener-br"> de Araujo et al., 2018</a></td><td>Named Entity Recognition</td><td> Portuguese </td></tr>
<tr><td>SwissJudgmentPrediction</td><td> <a href="https://arxiv.org/abs/2110.00806">Niklaus et al.</a> </td><td>Binary Text Classification</td><td>German, French, Italian</td></tr>
<tr><td>MultEURLEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. </a> </td><td>Multi-label Text Classification</td><td>23 languages (see below)</td></tr>
</table>
### Languages
See the [Data Splits](#data-splits) section.
## Dataset Structure
### Data Instances
#### German_LER
German_LER example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'german_ler')
```
```json
{
'id': '66722',
'tokens':['4.', 'Die', 'Kostenentscheidung', 'für', 'das', 'gerichtliche', 'Antragsverfahren', 'beruht', 'auf', '§', '21', 'Abs.', '2', 'Satz', '1', 'i.', 'V.', 'm.', '§', '20', 'Abs.', '1', 'Satz', '1', 'WBO', '.'],
'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 38]
}
```
#### LeNER-Br
LeNER-Br example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'lener_br')
```
```json
{
'id': '7826',
'tokens': ['Firmado', 'por', 'assinatura', 'digital', '(', 'MP', '2.200-2/2001', ')', 'JOSÉ', 'ROBERTO', 'FREIRE', 'PIMENTA', 'Ministro', 'Relator', 'fls', '.', 'PROCESSO', 'Nº', 'TST-RR-1603-79.2010.5.20.0001'],
'ner_tags': [0, 0, 0, 0, 0, 9, 10, 0, 3, 4, 4, 4, 0, 0, 0, 0, 11, 12, 12]}
```
#### SwissJudgmentPrediction
swissJudgmentPrediction_de example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'swissJudgmentPrediction_de')
```
```json
{
'id': 48755,
'year': 2014,
'text': "Sachverhalt: A. X._ fuhr am 25. Juli 2012 bei Mülligen mit seinem Personenwagen auf dem zweiten Überholstreifen der Autobahn A1 in Richtung Zürich. Gemäss Anklage schloss er auf einen Lieferwagen auf und schwenkte vom zweiten auf den ersten Überholstreifen aus. Danach fuhr er an zwei Fahrzeugen rechts vorbei und wechselte auf die zweite Überholspur zurück. B. Das Obergericht des Kantons Aargau erklärte X._ am 14. Januar 2014 zweitinstanzlich der groben Verletzung der Verkehrsregeln schuldig. Es bestrafte ihn mit einer bedingten Geldstrafe von 30 Tagessätzen zu Fr. 430.-- und einer Busse von Fr. 3'000.--. C. X._ führt Beschwerde in Strafsachen. Er beantragt, er sei von Schuld und Strafe freizusprechen. Eventualiter sei die Sache an die Vorinstanz zurückzuweisen. ",
'label': 0,
'language': 'de',
'region': 'Northwestern Switzerland',
'canton': 'ag',
'legal area': 'penal law'
}
```
#### MultiEURLEX
Monolingual example out of the MultiEURLEX-Dataset
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de')
```
```json
{
'celex_id': '32002R0130',
'text': 'Verordnung (EG) Nr. 130/2002 der Kommission\nvom 24. Januar 2002\nbezüglich der im Rahmen der Auss...',
'labels': [3, 17, 5]}
```
Multilingual example out of the MultiEURLEX-Dataset
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_all_languages')
```
```json
{
'celex_id': '32002R0130',
'text': {
'bg': None,
'cs': None,
'da': 'Kommissionens ...',
'de': 'Verordnung ... ',
'el': '...',
'en': '...',
...
},
'labels': [3, 17, 5]
}
```
### Data Fields
#### German_LER
- `id`: id of the sample
- `tokens`: the tokens of the sample text
- `ner_tags`: the NER tags of each token
#### LeNER_Br
- `id`: id of the sample
- `tokens`: the tokens of the sample text
- `ner_tags`: the NER tags of each token (integer class indices; see the decoding sketch below)
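For the two NER configurations, `ner_tags` are integer class indices. A small decoding sketch (assuming the tags are exposed as a `Sequence` of `ClassLabel` features, which is the usual `datasets` convention):
```python
from datasets import load_dataset

ds = load_dataset('jfrenz/legalglue', 'german_ler', split='train')
label_names = ds.features['ner_tags'].feature.names  # assumption: Sequence(ClassLabel)

example = ds[0]
tags = [label_names[t] for t in example['ner_tags']]
print(list(zip(example['tokens'], tags))[:10])
```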
#### SwissJudgmentPrediction
- `id`: (**int**) ID of the document
- `year`: (**int**) the publication year
- `text`: (**str**) the facts of the case
- `label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval)
- `language`: (**str**) one of (de, fr, it)
- `region`: (**str**) the region of the lower court
- `canton`: (**str**) the canton of the lower court
- `legal area`: (**str**) the legal area of the case
#### MultiEURLEX
Monolingual use:
- `celex_id`: (**str**) Official Document ID of the document
- `text`: (**str**) An EU Law
- `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels)
Multilingual use:
- `celex_id`: (**str**) Official Document ID of the document
- `text`: (dict[**str**]) A dictionary with the 23 languages as keys and the corresponding EU Law as values.
- `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels)
The labels list consists by default of level 1 EUROVOC concepts. This can be changed by adding the `label_level` parameter when loading the dataset (available levels: level_1, level_2, level_3, all_levels).
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de', label_level="level_3")
```
### Data Splits
<table>
<tr><th>Dataset</th><th> Language </th> <th> ISO code </th> <th> Number of Documents train/dev/test </th> </tr>
<tr><td>German-LER</td><td>German</td> <td><b>de</b></td> <td> 66723 / - / - </td> </tr>
<tr><td>LeNER-Br</td><td>Portuguese</td> <td><b>pt</b></td> <td> 7828 / 1177 / 1390 </td> </tr>
<tr><td rowspan="3">SwissJudgmentPrediction</td><td>German</td> <td><b>de</b></td> <td> 35458 / 4705 / 9725 </td> </tr>
<tr><td> French </td><td><b>fr</b></td><td> 21179 / 3095 / 6820 </td> </tr>
<tr><td> Italian </td><td><b>it</b></td><td> 3072 / 408 / 812 </td> </tr>
<tr><td rowspan="23">MultiEURLEX</td><td>English </td> <td><b>en</b></td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Italian </td> <td> <b>it</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Spanish </td> <td> <b>es</b> </td> <td> 52,785 / 5,000 / 5,000 </td> </tr>
<tr><td> Polish </td> <td> <b>pl</b> </td> <td> 23,197 / 5,000 / 5,000 </td> </tr>
<tr><td> Romanian </td> <td> <b>ro</b> </td> <td> 15,921 / 5,000 / 5,000 </td> </tr>
<tr><td> Dutch </td> <td> <b>nl</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Hungarian </td> <td> <b>hu</b> </td> <td> 22,664 / 5,000 / 5,000 </td> </tr>
<tr><td> Portuguese </td> <td> <b>pt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Czech </td> <td> <b>cs</b> </td> <td> 23,187 / 5,000 / 5,000 </td> </tr>
<tr><td> Swedish </td> <td> <b>sv</b> </td> <td> 42,490 / 5,000 / 5,000 </td> </tr>
<tr><td> Bulgarian </td> <td> <b>bg</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Danish </td> <td> <b>da</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Finnish </td> <td> <b>fi</b> </td> <td> 42,497 / 5,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Lithuanian </td> <td> <b>lt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Croatian </td> <td> <b>hr</b> </td> <td> 7,944 / 2,500 / 5,000 </td> </tr>
<tr><td> Slovene </td> <td> <b>sl</b> </td> <td> 23,184 / 5,000 / 5,000 </td> </tr>
<tr><td> Estonian </td> <td> <b>et</b> </td> <td> 23,126 / 5,000 / 5,000 </td> </tr>
<tr><td> Latvian </td> <td> <b>lv</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Maltese </td> <td> <b>mt</b> </td> <td> 17,521 / 5,000 / 5,000 </td> </tr>
</table>
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
| jfrenz/legalglue | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"multilinguality:multilingual",
"source_datasets:extended",
"language:en",
"language:da",
"language:de",
"language:nl",
"language:sv",
"language:bg",
"language:cs",
"language:hr",
"language:pl",
"language:sk",
"language:sl",
"language:es",
"language:fr",
"language:it",
"language:pt",
"language:ro",
"language:et",
"language:fi",
"language:hu",
"language:lt",
"language:lv",
"language:el",
"language:mt",
"german-ler",
"lener-br",
"arxiv:2003.13016",
"arxiv:2110.00806",
"arxiv:2109.00904",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en", "da", "de", "nl", "sv", "bg", "cs", "hr", "pl", "sk", "sl", "es", "fr", "it", "pt", "ro", "et", "fi", "hu", "lt", "lv", "el", "mt"], "multilinguality": ["multilingual"], "source_datasets": ["extended"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["named-entity-recognition", "multi-label-classification", "topic-classification"], "pretty_name": "LegalGLUE", "tags": ["german-ler", "lener-br"]} | 2022-10-22T21:14:36+00:00 |
7cee4936fb208443b00afa753d01c57376496856 |
# SAE-door-abstracts
This dataset includes ~1,550 abstracts of technical papers and journal articles from the SAE Mobilus database that cover the topics of automotive or aerospace doors, noise, acoustics, and vibrations. | jgammack/SAE-door-abstracts | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["sequence-modeling"], "task_ids": ["language-modeling"], "pretty_name": "SAE-door-abstracts", "language_bcp47": ["en-US"]} | 2022-10-22T07:23:24+00:00 |
11e49b7ece33d62afd7f65bc05ce60ad37f9ba7b |
## How to use the data sets
This dataset contains 1.9M unique pairs of protein sequences and ligand SMILES with experimentally determined
binding affinities. It can be used for fine-tuning a language model.
The data comes from the following sources:
- BindingDB
- PDBbind-cn
- BioLIP
- BindingMOAD
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/binding_affinity",split='train[:90%]')
validation = load_dataset("jglaser/binding_affinity",split='train[90%:]')
```
Optionally, datasets with certain protein sequences removed are available.
These can be used to test the predictive power for specific proteins even when
these are not part of the training data.
- `train_no_kras` (no KRAS proteins)
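A minimal loading sketch for such a filtered subset, assuming it is exposed under the split name `train_no_kras`:

```python
from datasets import load_dataset

# assumption: the KRAS-free subset is available as a split named "train_no_kras"
train_no_kras = load_dataset("jglaser/binding_affinity", split="train_no_kras")
print(len(train_no_kras))
```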
**Loading the data manually**
The file `data/all.parquet` contains the preprocessed data. To extract it,
you need to download and install [git LFS support](https://git-lfs.github.com/).
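As a sketch (assuming git LFS has pulled `data/all.parquet` into a local clone of this repository), the file can be inspected directly with pandas:

```python
import pandas as pd

# read the preprocessed parquet file from a local clone of this repository
df = pd.read_parquet("data/all.parquet")
print(df.shape)
print(df.columns.tolist())
```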
### Pre-process yourself
To manually perform the preprocessing, download the data sets from
1. BindingDB
In `bindingdb`, download the database as tab separated values
<https://bindingdb.org> > Download > BindingDB_All_2021m4.tsv.zip
and extract the zip archive into `bindingdb/data`
Run the steps in `bindingdb.ipynb`
2. PDBBind-cn
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then login and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
Perform the steps in the notebook `pdbbind.ipynb`
3. BindingMOAD
Go to <https://bindingmoad.org> and download the files `every.csv`
(All of Binding MOAD, Binding Data) and the non-redundant biounits
(`nr_bind.zip`). Place and extract those files into `binding_moad`.
Run the script `moad.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 moad.py`).
Perform the steps in the notebook `moad.ipynb`
4. BioLIP
Download from <https://zhanglab.ccmb.med.umich.edu/BioLiP/> the files
- receptor1.tar.bz2 (Receptor1, Non-redundant set)
- ligand_2013-03-6.tar.bz2 (Ligands)
- BioLiP.tar.bz2 (Annotations)
and extract them in `biolip/data`.
The following steps are **optional**; they **do not** result in additional binding affinity data.
Download the script
- download_all_sets.pl
from the Weekly update subpage.
Update the 2013 database to its current state
`perl download_all_sets.pl`
Run the script `biolip.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 biolip.py`).
Perform the steps in the notebook `biolip.ipynb`
5. Final concatenation and filtering
Run the steps in the notebook `combine_dbs.ipynb`
| jglaser/binding_affinity | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"tags": ["molecules", "chemistry", "SMILES"]} | 2022-03-12T00:29:11+00:00 |
12ef6ff7249d499ae2255caa3d3d80a1cccb308d |
# Dataset Card for ClarinPL Sejm/Senat Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLARIN-PL mowa](https://mowa.clarin-pl.eu/)
- **Repository:** [Needs More Information]
- **Paper:** [System for Automatic Transcription of Sessions of the Polish Senate](https://acoustics.ippt.pan.pl/index.php/aa/article/view/327/pdf_32)
- **Leaderboard:** [Paperswithcode Leaderboard][Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A collection of 97 hours of parliamentary speeches published on the ClarinPL website.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`.
An example from the dataset is:
```
{'file': '/root/.cache/huggingface/datasets/downloads/extracted/4143b1d75559b10028c1c7e8800c9ccc05934ca5a8ea15f8f9a92770576a1ee3/SejmSenat/audio/AdamAbramowicz-20130410/file000.wav',
'id': 'AdamAbramowicz-20130410-file000',
'speaker_id': 'AdamAbramowicz',
'text': 'panie marszałku wysoka izbo panie ministrze próbuje się przedstawiać polskę jako zieloną wyspę kraj który się szybko rozwija tymczasem rzeczywistość jest zupełnie inna a widać ją także dzisiaj przed polskim parlamentem próbuje się rząd próbuje zagonić polaków do pracy aż do śmierci przedłużać wiek emerytalny czyliczyli sytuacja gospodarcza polski w tym wypadku jest przedstawiana już zupełnie inaczej pakiet klimatyczny i protokół z kioto jak się zgadzają fachowcy od gospodarki jest szkodliwy dla krajów które są na dorobku a polska właśnie jest takim krajem'}
```
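A minimal sketch of loading the corpus with the `datasets` library (split names follow the table in the Data Splits section below):

```python
from datasets import load_dataset

# loads the default configuration of the corpus
ds = load_dataset("jimregan/clarinpl_sejmsenat")
sample = ds["train"][0]
print(sample["speaker_id"], sample["text"][:80])
```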
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- text: the transcription of the audio file.
- speaker_id: The ID of the speaker of the audio.
### Data Splits
| | Train | Test |
| ----- | ----- | ---- |
| dataset | 6622 | 130 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
### Contributions
[Needs More Information] | jimregan/clarinpl_sejmsenat | [
"task_categories:other",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:other",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language": ["pl"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["other", "automatic-speech-recognition"], "task_ids": []} | 2023-01-22T13:37:24+00:00 |
f306efab67c654660955f251fa7fa3f7d687cae1 |
# Dataset Card for ClarinPL Studio Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLARIN-PL mowa](https://mowa.clarin-pl.eu/)
- **Repository:** [Kaldi Baseline](https://github.com/danijel3/ClarinStudioKaldi)
- **Paper:** [Polish Read Speech Corpus for Speech Tools and Services](https://arxiv.org/abs/1706.00245)
- **Leaderboard:** [Paperswithcode Leaderboard][Needs More Information]
- **Point of Contact:** [Danijel Koržinek](https://github.com/danijel3/)
### Dataset Summary
The corpus consists of 317 speakers recorded in 554
sessions, where each session consists of 20 read sentences and 10 phonetically rich words. The size of
the audio portion of the corpus amounts to around 56 hours, with transcriptions containing 356674 words
from a vocabulary of size 46361.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`.
An example from the dataset is:
```
{'file': '/root/.cache/huggingface/datasets/downloads/extracted/333ddc746f2df1e1d19b44986992d4cbe28710fde81d533a220e755ee6c5c519/audio/SES0001/rich001.wav',
'id': 'SES0001_rich001',
'speaker_id': 'SPK0001',
'text': 'drożdże dżip gwożdżenie ozimina wędzarz rdzeń wędzonka ingerować kładzenie jutrzenka'}
```
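A minimal sketch of loading the corpus with the `datasets` library (the train split is used here; see the Data Splits table below for all splits):

```python
from datasets import load_dataset

# loads the default configuration of the corpus
ds = load_dataset("jimregan/clarinpl_studio")
sample = ds["train"][0]
print(sample["speaker_id"], sample["text"])
```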
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- text: the transcription of the audio file.
- speaker_id: The ID of the speaker of the audio.
### Data Splits
| | Train | Test | Valid |
| ----- | ----- | ---- | ----- |
| dataset | 11222 | 1362 | 1229 |
## Dataset Creation
### Curation Rationale
The purpose of this segment of the project was to develop specific tools that would allow for automatic and semi-automatic processing of large quantities of acoustic speech data. Another purpose of the corpus was to serve as a reference for studies in phonetics and pronunciation.
### Source Data
#### Initial Data Collection and Normalization
The corpus was recorded in a studio environment using two microphones: a high-quality studio microphone and a typical consumer audio headset.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[CLARIN PUB+BY+INF+NORED](https://mowa.clarin-pl.eu/korpusy/LICENSE)
### Citation Information
```
@article{korvzinek2017polish,
title={Polish read speech corpus for speech tools and services},
author={Kor{\v{z}}inek, Danijel and Marasek, Krzysztof and Brocki, {\L}ukasz and Wo{\l}k, Krzysztof},
journal={arXiv preprint arXiv:1706.00245},
year={2017}
}
```
### Contributions
[Needs More Information]
| jimregan/clarinpl_studio | [
"task_categories:other",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:other",
"arxiv:1706.00245",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language": ["pl"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["other", "automatic-speech-recognition"], "task_ids": []} | 2023-01-21T12:27:08+00:00 |
2aa1a929e2f3ed32b7012eaa35f7e4cbc2d462a6 |
# Dataset Card for Augmented-GLUE-SST2
Automatically augmented data from the train split of the SST-2 dataset, created using a conditional text generation approach.
Code used to generate this file will soon be available at https://github.com/IntelLabs/nlp-architect.
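A minimal sketch of loading the augmented data and inspecting its splits and columns (the card does not document the column names, so they are printed rather than assumed):

```python
from datasets import load_dataset

ds = load_dataset("jmamou/augmented-glue-sst2")
print(ds)  # inspect the available splits and column names
```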
| jmamou/augmented-glue-sst2 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en-US"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "extended": ["original"]} | 2022-07-17T11:25:34+00:00 |
0db57b32c35d3fa23ca1a647a102d9722863fbe2 |
# Dataset Card for ICC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [Jón Friðrik Daðason](mailto:[email protected])
### Dataset Summary
The Icelandic Crawled Corpus (ICC) contains approximately 930M tokens which have been scraped from a selection of Icelandic websites, including news sites, government websites and forums. The scraped text is presented in its original form, unannotated, untokenized and without deduplication.
### Supported Tasks and Leaderboards
The ICC is primarily intended for use in training language models. It can be combined with other corpora, such as the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) and the Icelandic portion of the [mC4](https://huggingface.co/datasets/mc4) corpus.
### Languages
This corpus contains text in Icelandic, scraped from a variety of online sources.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each scraped item consists of two fields:
* **url**: The source URL of the scraped text.
* **text**: The scraped text.
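A minimal loading sketch (assuming the corpus is exposed as a single `train` split, since no dedicated splits are provided):

```python
from datasets import load_dataset

icc = load_dataset("jonfd/ICC", split="train")
example = icc[0]
print(example["url"])
print(example["text"][:200])
```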
### Data Splits
N/A
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Although this corpus consists entirely of text collected from publicly available websites, it may contain some examples of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This corpus was created by Jón Friðrik Daðason, during work done at the [Language and Voice Lab](https://lvl.ru.is/) at [Reykjavik University](https://www.ru.is/).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0
International License. Any text, HTML page links, information, metadata or
other materials in this work may be subject to separate terms and
conditions between you and the owners of such content.
If you are a copyright owner or an agent thereof and believe that any
content in this work infringes upon your copyrights, you may submit a
notification with the following information:
* Your full name and information reasonably sufficient to permit us to
contact you, such as mailing address, phone number and an email address.
* Identification of the copyrighted work you claim has been infringed.
* Identification of the material you claim is infringing and should be
removed, and information reasonably sufficient to permit us to locate
the material.
### Citation Information
N/A
### Contributions
Thanks to [@jonfd](https://github.com/jonfd) for adding this dataset.
| jonfd/ICC | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["is"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "ICC"} | 2022-10-22T14:15:16+00:00 |
503eee0894f308dbd1d74c1b4ecf4cfc99dd43f9 |
MultiDoGo dialog dataset:
- paper: https://aclanthology.org/D19-1460/
- git repo: https://github.com/awslabs/multi-domain-goal-oriented-dialogues-dataset
*Abstract*
The need for high-quality, large-scale, goal-oriented dialogue datasets continues to grow as virtual assistants become increasingly wide-spread. However, publicly available datasets useful for this area are limited either in their size, linguistic diversity, domain coverage, or annotation granularity. In this paper, we present strategies toward curating and annotating large scale goal oriented dialogue data. We introduce the MultiDoGO dataset to overcome these limitations. With a total of over 81K dialogues harvested across six domains, MultiDoGO is over 8 times the size of MultiWOZ, the other largest comparable dialogue dataset currently available to the public. Over 54K of these harvested conversations are annotated for intent classes and slot labels. We adopt a Wizard-of-Oz approach wherein a crowd-sourced worker (the “customer”) is paired with a trained annotator (the “agent”). The data curation process was controlled via biases to ensure a diversity in dialogue flows following variable dialogue policies. We provide distinct class label tags for agents vs. customer utterances, along with applicable slot labels. We also compare and contrast our strategies on annotation granularity, i.e. turn vs. sentence level. Furthermore, we compare and contrast annotations curated by leveraging professional annotators vs the crowd. We believe our strategies for eliciting and annotating such a dialogue dataset scales across modalities and domains and potentially languages in the future. To demonstrate the efficacy of our devised strategies we establish neural baselines for classification on the agent and customer utterances as well as slot labeling for each domain.
## Licensing information
Community Data License Agreement – Permissive, Version 1.0. | jpcorb20/multidogo | [
"task_categories:text-classification",
"task_categories:other",
"task_ids:intent-classification",
"task_ids:dialogue-modeling",
"task_ids:slot-filling",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10k<n<100k",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10k<n<100k"], "source_datasets": ["original"], "task_categories": ["text-classification", "sequence-modeling", "structure-prediction", "other"], "task_ids": ["intent-classification", "dialogue-modeling", "slot-filling", "named-entity-recognition", "other-other-my-task-description"], "pretty_name": "multidogo"} | 2022-10-20T17:33:00+00:00 |
3883ffebf0733836fbf325f0b5b90648c06a3099 | # The "Crime Facts" of "Offenses of Fraudulence" in Judicial Yuan Verdicts Dataset
This dataset is based on the judgments of "Offenses of Fraudulence" cases published by the Judicial Yuan. It covers the period from January 1, 2011, to December 31, 2021. 74,823 original documents (judgments and rulings) were collected, and only the contents of the "criminal facts" field of each judgment were kept. The dataset is divided into three parts: the training set has 59,858 verdicts, about 80% of the original data; the remaining 20% is split evenly between the validation set (7,482 verdicts, 10%) and the test set (7,483 verdicts, 10%). The "criminal facts" text has been segmented into Chinese words; if you do not need word segmentation, merge the tokens yourself.
# 司法院「詐欺罪」判決書「犯罪事實」資料集
本資料集是以司法院公開之「詐欺」案件判決書做成之資料集。資料集之資料範圍從100年1月1日至110年12月31日,所蒐集到的原始資料共有 74823 篇(判決以及裁定),我們只取判決書的「犯罪事實」欄位內容,並把這原始的資料分成三份,用於訓練的資料集有59858篇,約佔原始資料的80%,剩下的20%,則是各分配10%給驗證集(7482篇),10%給測試集(7483篇)。「犯罪事實」已經經過斷詞,如果不需要斷詞,請自行合併。 | jslin09/Fraud_Case_Verdicts | [
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"legal",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["zh"], "license": "apache-2.0", "size_categories": ["100M<n<1B"], "task_categories": ["text-generation"], "tags": ["legal"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.csv"}, {"split": "test", "path": "test.csv"}, {"split": "validate", "path": "validate.csv"}]}]} | 2024-01-17T09:55:37+00:00 |
b56484636d458e72c094ef81c6e85b3a695ee7e4 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
## Dataset Description
This is a translated version of the original CoNLL2003 dataset (translated from English to Slovak via Google Translate). Annotation was done mostly automatically with word-matching scripts; records where some tags could not be matched were annotated manually (about 10%). Unlike the original CoNLL2003 dataset, this one contains only NER tags.
- **Point of Contact:** [@ju-bezdek](https://github.com/ju-bezdek)
### Supported Tasks and Leaderboards
NER
labels:
- 0: O
- 1: B-PER
- 2: I-PER
- 3: B-ORG
- 4: I-ORG
- 5: B-LOC
- 6: I-LOC
- 7: B-MISC
- 8: I-MISC
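A minimal sketch of loading the data and mapping tag ids back to the label names above (the column names `tokens` and `ner_tags` are assumptions carried over from the original CoNLL2003 dataset):

```python
from datasets import load_dataset

label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

ds = load_dataset("ju-bezdek/conll2003-SK-NER")
example = ds["train"][0]
# pair each token with its human-readable NER tag
print(list(zip(example["tokens"], [label_names[i] for i in example["ner_tags"]])))
```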
### Languages
sk
## Dataset Structure
### Data Splits
train, test, val
## Dataset Creation
### Source Data
https://huggingface.co/datasets/conll2003
### Annotations
#### Annotation process
- Machine Translation
- Machine pairing tags with reverse translation, and hardcoded rules (including phrase regex matching etc.)
- Manual annotation of records that couldn't be automatically matched
| ju-bezdek/conll2003-SK-NER | [
"task_categories:other",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"language:sk",
"license:unknown",
"structure-prediction",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["found"], "language": ["sk"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|conll2003"], "task_categories": ["other"], "task_ids": ["named-entity-recognition", "part-of-speech"], "pretty_name": "conll-2003-sk-ner", "tags": ["structure-prediction"]} | 2023-03-21T08:13:05+00:00 |
968eb67fdb0314e80ae9222cd2f60077db7dd4f5 |
## ReactionGIF
> From https://github.com/bshmueli/ReactionGIF

___
## Excerpt from original repo readme
ReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions.
To find out more about ReactionGIF,
check out our ACL 2021 paper:
* Shmueli, Ray and Ku, [Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter](https://arxiv.org/abs/2105.09967)
## Citation
If you use our dataset, kindly cite the paper using the following BibTex entry:
```bibtex
@misc{shmueli2021happy,
title={Happy Dance, Slow Clap: Using Reaction {GIFs} to Predict Induced Affect on {Twitter}},
author={Boaz Shmueli and Soumya Ray and Lun-Wei Ku},
year={2021},
eprint={2105.09967},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| julien-c/reactiongif | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2105.09967",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "paperswithcode_id": "reactiongif"} | 2022-09-20T11:10:26+00:00 |
194343254d70c104a7a923e971c57954316b138e | # AutoNLP Dataset for project: song-lyrics-demo
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project song-lyrics-demo.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 2,
"text": "[Intro: Method Man w/ sample] + (Sunny valentine). We got butter (8X). (The gun'll go the gun'll go.... The gun'll go...). [Raekwon]. Aiyo one thing for sure keep you of all. Keep a nice crib fly away keep to the point. Keep niggaz outta ya face who snakes. Keep bitches in they place keep the mac in a special place. Keep moving for papes keep cool keep doing what you doing. Keep it fly keep me in the crates. Cuz I will erase shit on the real note you'se a waste. It's right here for you I will lace you. Rip you and brace you put a nice W up on ya face. Word to mother you could get chased. It's nothing to taste blood on a thug if he gotta go. All I know is we be giving grace. This is a place from where we make tapes. We make 'em everywhere still in all we be making base. Y'all be making paste these little niggaz they be making shapes. Our shit is art yours is traced. [Chorus: Sunny Valentine]. This is the way that we rolling in the streets. You know when we roll we be packing that heat. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go.... [Method Man]. This is Poverty Island man these animals don't run. Slums where the ambulance don't come. Who got the best base? Fiends waiting to smoke some. Approach something ask him where he getting that coke from. My dudes hug blocks like samurai shogun. Cuz no V and no ones equalling no fun. Who want a treat they know huh? Body to go numb. My woman need funds plus her hair and her toes done. It is what it is though you fuck with the kid flow. That make it hard to get dough the harder to get gold. Harder the piff blow harder when it snow. The pinky and the wrist glow this here what we live for. Get gwop then get low but first thought. We gotta get the work off the gift and the curse boss. Yeah see I'm the shit yo the dirt in the fit no. Hustling from the get-go the motto is get more. [Chorus]. [Masta Killa]. We was quiet flashy brothers strapped all along. With the dirty .38 long twelve hour shift gate. Took case state to state you think he won't hold his weight?. Put ya money on the plate and watch it get scrapped. We get ape up in that club off that juice and Henn. And it's a no win situation fucking with them. You mean like Ewing at the front at the rim finger roll a Dutch. Million dollar stages touched techs gauges bust. Trust no one the lone shogun rugged Timb boot stomper. Damaging lyrical mass destruction launcher. Nothing can calm the quakeage when I break kid. Peace to my brothers up north doing state bids. [Chorus]. [Chorus 2: Sunny Valentine]. Whoa... this is the way we be rolling in the club. You know when we roll we be packing .32 snubs. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. [Outro: sample to fade]. We got butter..."
},
{
"target": 4,
"text": "[Sean Paul:]. Aye. It's Sean Paul 'long side. The mandem called Jay Sean. Fi di gal dem. Tellin' 'em again what we tell 'em. [Jay Sean:]. Pass me a drink to the left yeah. Said her name was Delilah. And I'm like \"you should come my way\". I already surrender. Damn girl that body's fire. You gon' remember my name. (She should give it up definite). You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. In the morning we gon' do it again wake up. I'mma do it like we just broke up and made up. Get up on top of me and work up a sweat work up a sweat. See we can do it any type of way that you want. I'm thinking maybe you're the right kind of wrong. I'm saying baby you won't ever forget my love. You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. [Sean Paul:]. Girl mi wan' figure hundred hundred and fifty. Love how you move you know that I'm with it. Perfect size I know that you fit it. Just let me hit it you know mi not quit it. Pon di Dl like Cassie and Diddy. Mi na wound a mi watch we like Sin City. Full time mi run da ting mi tall legend. If you don't come gimme dat would I be offended my girl. Come here down wan' see something me want in life and then waste time. A you a mi pree every day baby full time when ya de pon on mi mind. So mi wine if you give it to me baby girl so we can play. Stick to the ting now I am your king my girl this is what we say. [Jay Sean:]. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=6, names=['Dance', 'Heavy Metal', 'Hip Hop', 'Indie', 'Pop', 'Rock'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 48493 |
| valid | 5389 |
| juliensimon/autonlp-data-song-lyrics-demo | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-25T08:50:45+00:00 |
4b0770d80c127db8eb5f8b80784978324c91217f | # AutoNLP Dataset for project: song-lyrics
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project song-lyrics.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 2,
"text": "[Intro: Method Man w/ sample] + (Sunny valentine). We got butter (8X). (The gun'll go the gun'll go.... The gun'll go...). [Raekwon]. Aiyo one thing for sure keep you of all. Keep a nice crib fly away keep to the point. Keep niggaz outta ya face who snakes. Keep bitches in they place keep the mac in a special place. Keep moving for papes keep cool keep doing what you doing. Keep it fly keep me in the crates. Cuz I will erase shit on the real note you'se a waste. It's right here for you I will lace you. Rip you and brace you put a nice W up on ya face. Word to mother you could get chased. It's nothing to taste blood on a thug if he gotta go. All I know is we be giving grace. This is a place from where we make tapes. We make 'em everywhere still in all we be making base. Y'all be making paste these little niggaz they be making shapes. Our shit is art yours is traced. [Chorus: Sunny Valentine]. This is the way that we rolling in the streets. You know when we roll we be packing that heat. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go.... [Method Man]. This is Poverty Island man these animals don't run. Slums where the ambulance don't come. Who got the best base? Fiends waiting to smoke some. Approach something ask him where he getting that coke from. My dudes hug blocks like samurai shogun. Cuz no V and no ones equalling no fun. Who want a treat they know huh? Body to go numb. My woman need funds plus her hair and her toes done. It is what it is though you fuck with the kid flow. That make it hard to get dough the harder to get gold. Harder the piff blow harder when it snow. The pinky and the wrist glow this here what we live for. Get gwop then get low but first thought. We gotta get the work off the gift and the curse boss. Yeah see I'm the shit yo the dirt in the fit no. Hustling from the get-go the motto is get more. [Chorus]. [Masta Killa]. We was quiet flashy brothers strapped all along. With the dirty .38 long twelve hour shift gate. Took case state to state you think he won't hold his weight?. Put ya money on the plate and watch it get scrapped. We get ape up in that club off that juice and Henn. And it's a no win situation fucking with them. You mean like Ewing at the front at the rim finger roll a Dutch. Million dollar stages touched techs gauges bust. Trust no one the lone shogun rugged Timb boot stomper. Damaging lyrical mass destruction launcher. Nothing can calm the quakeage when I break kid. Peace to my brothers up north doing state bids. [Chorus]. [Chorus 2: Sunny Valentine]. Whoa... this is the way we be rolling in the club. You know when we roll we be packing .32 snubs. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. The gun'll go the gun'll go the gun'll go the gun'll go. [Outro: sample to fade]. We got butter..."
},
{
"target": 4,
"text": "[Sean Paul:]. Aye. It's Sean Paul 'long side. The mandem called Jay Sean. Fi di gal dem. Tellin' 'em again what we tell 'em. [Jay Sean:]. Pass me a drink to the left yeah. Said her name was Delilah. And I'm like \"you should come my way\". I already surrender. Damn girl that body's fire. You gon' remember my name. (She should give it up definite). You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. In the morning we gon' do it again wake up. I'mma do it like we just broke up and made up. Get up on top of me and work up a sweat work up a sweat. See we can do it any type of way that you want. I'm thinking maybe you're the right kind of wrong. I'm saying baby you won't ever forget my love. You need it. I need it. We can jump in the deep end. I wanna get lost in your love. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go. [Sean Paul:]. Girl mi wan' figure hundred hundred and fifty. Love how you move you know that I'm with it. Perfect size I know that you fit it. Just let me hit it you know mi not quit it. Pon di Dl like Cassie and Diddy. Mi na wound a mi watch we like Sin City. Full time mi run da ting mi tall legend. If you don't come gimme dat would I be offended my girl. Come here down wan' see something me want in life and then waste time. A you a mi pree every day baby full time when ya de pon on mi mind. So mi wine if you give it to me baby girl so we can play. Stick to the ting now I am your king my girl this is what we say. [Jay Sean:]. I just wanna be close to you. (Just wanna I just wanna). And do all the things you want me to. I just wanna be close to you. (I just wanna I just wanna). And show you the way I feel. You make my love go. You make my love go. You make my love go"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=6, names=['Dance', 'Heavy Metal', 'Hip Hop', 'Indie', 'Pop', 'Rock'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 48493 |
| valid | 5389 |
| juliensimon/autonlp-data-song-lyrics | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-25T08:50:51+00:00 |
d7c03d1f921ac85c3731e8ab256889c495bf36aa | # FewGLUE_32dev
This repository contains the FewGLUE_32dev dataset, an extension of [FewGLUE](https://github.com/timoschick/fewglue), which enables NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. It has been shown in [previous work](https://arxiv.org/abs/2012.15723) that using larger development sets confers a significant advantage beyond the few-shot setting. FewGLUE_32dev is built by adding few-shot dev sets of 32 examples randomly selected from the original, unused SuperGLUE training sets.
### Data Format
The data files follow the exact same format as [SuperGLUE task files](https://super.gluebenchmark.com/tasks).
### Structure
For each SuperGLUE task `T`, the directory `FewGLUE_32dev/T` contains the 32-sample-dev file (`dev32.jsonl`), which consists of 32 examples for few-shot validation.
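A minimal sketch of reading one of the 32-sample dev files (here `BoolQ` is used as an example task directory; any SuperGLUE task name from the repository layout would work):

```python
import json

# read the 32-example few-shot dev set for one task
with open("FewGLUE_32dev/BoolQ/dev32.jsonl", encoding="utf-8") as f:
    dev32 = [json.loads(line) for line in f]

print(len(dev32))  # expected: 32
```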
| juny116/few_glue | [
"arxiv:2012.15723",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-13T04:37:37+00:00 |
9a2b5f9fe33bf2ef9cc1b19cdb532574299d6d71 | This dataset was gathered from the [Google Fact Checker API](https://toolbox.google.com/factcheck/explorer) using an automatic web scraper. 10,000 facts were pulled, but for the sake of simplicity only the ones where the rating was the single word "false" or "true" were kept, which filtered the set down to ~3,000 fact checks, with about 90% of the facts being false.
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: polifact-covid-fact-checker
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- question-answering
task_ids:
- fact-checking
- multi-label-classification
- sentiment-classification
- closed-domain-qa
- extractive-qa | justinqbui/covid_fact_checked_google_api | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-13T00:51:50+00:00 |
3b6b4bf045e9f17a84c6e8df92cb9d290d36e500 | This dataset was gathered using an automated web scraper that scraped the [PolitiFact COVID fact checker](https://www.politifact.com/coronavirus/). The dataset contains three columns: the text, the rating given by PolitiFact (half-true, full-flop, pants-fire, barely-true, true, mostly-true, and false), and the adjusted rating.
The adjusted rating was created by mapping the raw rating given by PolitiFact:
```
true -> true
mostly-true -> true
half-true -> misleading
barely-true -> misleading
false -> false
pants-fire -> false
full-flop -> false
```
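The same mapping, expressed as a small Python helper (a sketch of the adjustment described above):

```python
# raw PolitiFact rating -> adjusted rating, as listed above
ADJUSTED_RATING = {
    "true": "true",
    "mostly-true": "true",
    "half-true": "misleading",
    "barely-true": "misleading",
    "false": "false",
    "pants-fire": "false",
    "full-flop": "false",
}

def adjust_rating(raw_rating: str) -> str:
    return ADJUSTED_RATING[raw_rating.strip().lower()]
```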
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: polifact-covid-fact-checker
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- question-answering
task_ids:
- fact-checking
- multi-label-classification
- sentiment-classification
- closed-domain-qa
- extractive-qa | justinqbui/covid_fact_checked_polifact | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-13T00:33:36+00:00 |
755454d31bf8cdd1dc7e52e7c63d37d3a33f2069 | Just for testing. A copy of the dataset https://www.kaggle.com/dataclusterlabs/domestic-house-windows-dataset | k0t1k/test | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-19T16:45:26+00:00 |
4f51527df44a7f7f915bee494f1129915118d0e1 | # CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
The CORD dataset is cloned from the [clovaai](https://github.com/clovaai/cord) GitHub repo
- Box coordinates are normalized against image width/height
- Labels with very few occurrences are replaced with O:
```
replacing_labels = ['menu.etc', 'menu.itemsubtotal',
'menu.sub_etc', 'menu.sub_unitprice',
'menu.vatyn', 'void_menu.nm',
'void_menu.price', 'sub_total.othersvc_price']
```
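A sketch of the box-coordinate normalization mentioned in the first bullet above (the 0–1000 integer scale is an assumption, following the common LayoutLM-style convention; the repository scripts are the authoritative reference):

```python
# normalize an (x0, y0, x1, y1) box against the image width/height
def normalize_bbox(bbox, width, height):
    x0, y0, x1, y1 = bbox
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]
```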
Check for more info [Sparrow](https://github.com/katanaml/sparrow)
## Citation
### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
```
@article{park2019cord,
title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
year={2019}
}
```
### Post-OCR parsing: building simple and robust parser via BIO tagging
```
@article{hwang2019post,
title={Post-OCR parsing: building simple and robust parser via BIO tagging},
author={Hwang, Wonseok and Kim, Seonghyeon and Yim, Jinyeong and Seo, Minjoon and Park, Seunghyun and Park, Sungrae and Lee, Junyeop and Lee, Bado and Lee, Hwalsuk},
booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
year={2019}
}
``` | katanaml/cord | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-03-06T15:02:45+00:00 |
314b0e9b26b114e9731e645439c75a5e93ca21f6 | https://www.geogebra.org/m/cwcveget
https://www.geogebra.org/m/b8dzxk6z
https://www.geogebra.org/m/nqanttum
https://www.geogebra.org/m/pd3g8a4u
https://www.geogebra.org/m/jw8324jz
https://www.geogebra.org/m/wjbpvz5q
https://www.geogebra.org/m/qm3g3ma6
https://www.geogebra.org/m/sdajgph8
https://www.geogebra.org/m/e3ghhcbf
https://www.geogebra.org/m/msne4bfm
https://www.geogebra.org/m/nmcv2te5
https://www.geogebra.org/m/hguqx6cn
https://www.geogebra.org/m/jnyvpgqu
https://www.geogebra.org/m/syctd97g
https://www.geogebra.org/m/nq9erdby
https://www.geogebra.org/m/au4har8c | katoensp/VR-OP | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-03-30T14:54:47+00:00 |
65abe73d128fe38c1da174718ecef300f8e204c0 | A cleaned version of the mC4 dataset for Sinhala; the config is a direct adaptation of the original mC4 processing script. | keshan/clean-si-mc4 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-14T09:14:11+00:00 |
d8458d504dd9f497ef5a009976c253c97e6270a0 | This data set contains multi-speaker high quality transcribed audio data for Sinhalese. The data set consists of wave files and the transcriptions of the audio files.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in Sri Lanka.
See [LICENCE.txt](https://www.openslr.org/resources/30/LICENSE.txt) file for license information.
If you use this data in publications, please cite it as follows:
```
@inproceedings{Sodimana2018,
author={Keshan Sodimana and Pasindu {De Silva} and Supheakmungkol Sarin and Oddur Kjartansson and Martin Jansche and Knot Pipatsrisawat and Linne Ha},
title={{A Step-by-Step Process for Building TTS Voices Using Open Source Data and Frameworks for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese}},
year=2018,
booktitle={Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages},
pages={66--70},
doi={10.21437/SLTU.2018-14},
url={http://dx.doi.org/10.21437/SLTU.2018-14}
}
``` | keshan/multispeaker-tts-sinhala | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-04T14:39:30+00:00 |
a35806ef4a6f4f79a13fc09b82e81a346ff8272f | https://github.com/google-research-datasets/wit
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset.
WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages.
```
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
``` | keshan/wit-dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-07T17:15:42+00:00 |
d5dd7720cc49cc604a9817302c7175f627406537 | # Models Trained On ManyTypes4TypeScript
- [CodeBERT](https://huggingface.co/kevinjesse/codebert-MT4TS)
- [GraphCodeBERT](https://huggingface.co/kevinjesse/graphcodebert-MT4TS)
- [CodeBERTa](https://huggingface.co/kevinjesse/codeberta-MT4TS)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Dataset:** [https://doi.org/10.5281/zenodo.6387001](https://doi.org/10.5281/zenodo.6387001)
- **PapersWithCode:** [https://paperswithcode.com/sota/type-prediction-on-manytypes4typescript](https://paperswithcode.com/sota/type-prediction-on-manytypes4typescript)
### Dataset Summary
ManyTypes4TypeScript type inference dataset, available at the DOI link: [10.5281/zenodo.6387001](https://doi.org/10.5281/zenodo.6387001)
Given a line of source code, the task is to identify the types that correspond to the code tokens. We treat this as a tagging task, similar to NER and POS tagging, where the model must predict a structural property of code, i.e. types. This is a classification task where the labels are the top-occurring types in the training dataset. The size of the type vocabulary can be changed with the scripts found on GitHub.
### Supported Tasks and Leaderboards
- `multi-class-classification`: The dataset can be used to train a model for predicting types across a sequence.
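A minimal sketch of loading the dataset for this tagging task (column names follow the Data Fields table below):

```python
from datasets import load_dataset

ds = load_dataset("kevinjesse/ManyTypes4TypeScript")
sample = ds["validation"][0]
# tokens and their aligned type labels (None where no annotation applies)
print(sample["tokens"][:10])
print(sample["labels"][:10])
```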
### Languages
- TypeScript
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"tokens": ["import", "{", "Component", ",", "ChangeDetectorRef", "}", "from", "'@angular/core'", ";", "import", "{", "Router", "}", "from", "'@angular/router'", ";", "import", "{", "MenuController", "}", "from", "'@ionic/angular'", ";", "import", "{", "Storage", "}", "from", "'@ionic/storage'", ";", "import", "Swiper", "from", "'swiper'", ";", "@", "Component", "(", "{", "selector", ":", "'page-tutorial'", ",", "templateUrl", ":", "'tutorial.html'", ",", "styleUrls", ":", "[", "'./tutorial.scss'", "]", ",", "}", ")", "export", "class", "TutorialPage", "{", "showSkip", "=", "true", ";", "private", "slides", ":", "Swiper", ";", "constructor", "(", "public", "menu", ",", "public", "router", ",", "public", "storage", ",", "private", "cd", ")", "{", "}", "startApp", "(", ")", "{", "this", ".", "router", ".", "navigateByUrl", "(", "'/app/tabs/schedule'", ",", "{", "replaceUrl", ":", "true", "}", ")", ".", "then", "(", "(", ")", "=>", "this", ".", "storage", ".", "set", "(", "'ion_did_tutorial'", ",", "true", ")", ")", ";", "}", "setSwiperInstance", "(", "swiper", ")", "{", "this", ".", "slides", "=", "swiper", ";", "}", "onSlideChangeStart", "(", ")", "{", "this", ".", "showSkip", "=", "!", "this", ".", "slides", ".", "isEnd", ";", "this", ".", "cd", ".", "detectChanges", "(", ")", ";", "}", "ionViewWillEnter", "(", ")", "{", "this", ".", "storage", ".", "get", "(", "'ion_did_tutorial'", ")", ".", "then", "(", "res", "=>", "{", "if", "(", "res", "===", "true", ")", "{", "this", ".", "router", ".", "navigateByUrl", "(", "'/app/tabs/schedule'", ",", "{", "replaceUrl", ":", "true", "}", ")", ";", "}", "}", ")", ";", "this", ".", "menu", ".", "enable", "(", "false", ")", ";", "}", "ionViewDidLeave", "(", ")", "{", "this", ".", "menu", ".", "enable", "(", "true", ")", ";", "}", "}"],
"labels": [null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "MenuController", null, null, "Router", null, null, "Storage", null, null, "ChangeDetectorRef", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "Swiper", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null],
"url": "https://github.com/ionic-team/ionic-conference-app",
"path": "ionic-conference-app/src/app/pages/tutorial/tutorial.ts",
"commit_hash": "34d97d29369377a2f0173a2958de1ee0dadb8a6e",
"file": "tutorial.ts"}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
|field name | type | description |
|------------|-------------|--------------------------------------------|
|tokens |list[string] | Sequence of tokens (word tokenization) |
|labels |list[string] | A list of corresponding types |
|url |string | Repository URL |
|path |string | Original file path that contains this code |
|commit_hash |string | Commit identifier in the original project |
|file |string | File name |
### Data Splits
| name | train |validation| test |
|---------:|---------:|---------:|--------:|
|projects | 75.00% | 12.5% | 12.5% |
|files | 90.53% | 4.43% | 5.04% |
|sequences | 91.95% | 3.71% | 4.34% |
|types | 95.33% | 2.21% | 2.46% |
## Types by the Numbers
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
Human-annotated types in an optionally typed language, together with compiler-inferred annotations.
#### Annotation process
#### Who are the annotators?
Developers and the TypeScript compiler.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/kevinjesse
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0) license
### Citation Information
```
``` | kevinjesse/ManyTypes4TypeScript | [
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:code",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found", "machine-generated"], "language_creators": ["found"], "language": ["code"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": ["type-inference"], "pretty_name": "ManyTypes4TypeScript", "language_details": "TypeScript"} | 2022-10-22T07:35:33+00:00 |
30b869bd3b4e62823247bdda5b1d17b9aa0b47fc | For identifying personifications | kevinlu1248/personificationgen | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-02T04:03:44+00:00 |
b96cea6dd58782036d00b1b878f60470347a77f2 | # AI Stage MRC task
## Version Info
### v4.1.1
- Dataset with punctuation added to the v3.2.3 data (train_dataset_aug), both train and validation
- Located in train_aug_punctuation
- Fixes the error from v4.1.0
### v4.1.0
- Dataset with punctuation added to the v3.2.2 data (train_dataset_aug), both train and validation
- Located in train_data_aug
- Contains data with incorrectly labeled answers
### v4.0.1
- Dataset with punctuation added, both train and validation
- answers type is correct
### v4.0.0
- Dataset with punctuation added, train only
- answers type error
### v3.2.3
- Fixes the incorrect [ANSWER] positions in `v3.2.2`
### v3.2.2
- Removes the special token ([TITLE]) from `v3.2.1`
### v3.2.1
- Adds the special token ([ANSWER]) to `v3.2.0`
### v3.2.0
- Adds the special tokens ([TITLE], #) to `v1.3.1`
### v3.1.0
- Adds entity words found with an NER model after each question, based on `v3.0.0`
### v3.0.0
- Adds Taewook's answer and sentence-split tokens to `v1.0.0`
### v2.1.1
- Concatenates `v2.1.0` with the augmentation data from `v3.2.3`
- Uses the train and validation sets in the bt_context_extractive_final folder
### v2.1.0
- Pororo context augmentation for the extractive model
- Only samples whose answer appears exactly once in the context are augmented; answer positions have been adjusted
- Adds train and validation sets to the context_bt_for_extracive folder
### v2.0.1
- Removes samples from `v2.0.0` whose answer was corrupted inside the context
### v2.0.0
- Dataset with Chaeeun's context back-translation added
### v1.6.4
- Reorganizes the `train_dataset_curri` folder from `v1.6.3`
- `train_level_1` & `train_level_2` -> `train_level_1`
- `train_level_3` -> `train_level_2`
- `train_total`, which merges all `train_level_#` sets
### v1.6.3
- Adds the `train_dataset_curri` folder to `v1.6.2`; samples are scored and organized into `level0` ~ `level3`
- Datasets used: `train`, `train_perm01`, `train_perm02`, `train_perm04`, `train_mask_2`, `train_hard_mask`, `pororo_aug_ver2_len_context_easy`, `pororo_aug_ver2_len_context_normal`, `pororo_aug_ver2_len_context_hard`
### v1.6.2
- Adds the `train_mask_2` and `train_hard_mask` data to the `train_dataset` folder of `v1.6.1`
### v1.6.1
- In the `train_dataset` folder of `v1.6.0`:
- Adds curriculum-learning datasets split by the context length of `train_pororo_aug_ver2`
- easy : `len < 673`
- normal : `673 <= len < 935`
- hard : `935 <= len`
- Reflects the dataset updates from `v1.4.1`
### v1.6.0
- Adds sentence-permutation data with permutation ratios 0.1, 0.2, and 0.4 to the `train_dataset` folder of `v1.3.2`
### v1.5.0
- Adds masking datasets for confusing words and date information to `v1.4.1`
### v1.4.4
- Adds concatenated and shuffled-concatenated augmentation data (including train/valid pororo ver1) to `v1.4.1`
### v1.4.3
- Adds concatenated and shuffled-concatenated augmentation data (excluding train/valid pororo ver1) to `v1.4.1`
### v1.4.2
- Adds entity words found with an NER model after each question, based on `v1.4.1`
### v1.4.1
- Adds pororo aug ver2: following the question types shared by Daewoong, the original 7 questions were expanded to 45 and pororo augmentation was applied
### v1.3.2
- Applies the preprocessing that was missing from 'train_dataset_aeda' in 'v1.3.1'
### v1.3.1
- Adds the `train_dataset_aug` folder to `v1.3.0` (concatenates question-particle removal, back translation, AEDA, and pororo aug ver1)
### v1.3.0
- Adds 50,531 augmented samples to `v1.2.0` by applying pororo aug to `wiki_documents.json`
### v1.2.0
- Adds question-particle removal, back translation, and AEDA augmentation to `v1.1.0` (not applied to pororo aug)
### v1.1.0
- Adds entity words found with an NER model after each question, based on `v1.0.0`
### v1.0.0
- Preprocesses the contexts of `v0.1.1`
### v0.2.2
- Fixes question and answer errors in the `train` and `validation` sets of `train_dataset`
### v0.2.1
- Adds the same summary to `train_pororo_aug` and `validation_pororo_aug`
- Fixes an error found in `context_bullet` (sentences unrelated to the `context` were being generated)
### v0.2.0
- Dataset with Daewoong's pororo context summary added
### v0.1.1
- Dataset with Youngjae's pororo augmentation added
- Fixes question and answer errors in the `train` and `validation` sets of `train_dataset`
### v0.1.0
- Dataset with Youngjae's pororo augmentation added
### v0.0.0
- The base dataset provided by the competition
## LICENSE
- CC-BY-2.0
- All copyrights belong to AI Stage!
- https://stages.ai/
| kiyoung2/aistage-mrc | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-04T06:32:08+00:00 |
c62321036e5647db5767ecaff139912b554dc938 |
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: [Wojciech Kryściński](https://twitter.com/iam_wkr), [Nazneen Rajani](https://twitter.com/nazneenrajani), [Divyansh Agarwal](https://twitter.com/jigsaw2212), [Caiming Xiong](https://twitter.com/caimingxiong), [Dragomir Radev](http://www.cs.yale.edu/homes/radev/)
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
## Links
- [paper](https://arxiv.org/abs/2105.08209) by SalesForce Research
- [GitHub repo](https://github.com/salesforce/booksum)
<p align="center"><img src="misc/book_sumv4.png"></p>
## Table of Contents
1. [Citation](#citation)
2. [Legal Note](#legal-note)
3. [License](#license)
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the **BSD-3 License** (see `LICENSE.txt` for details). | kmfoda/booksum | [
"license:bsd-3-clause",
"arxiv:2105.08209",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": ["bsd-3-clause"], "train-eval-index": [{"config": "kmfoda--booksum", "task": "summarization", "task_id": "summarization", "splits": {"eval_split": "test"}, "col_mapping": {"chapter": "text", "summary_text": "target"}}]} | 2022-11-30T12:03:43+00:00 |
fb7b55ab3e4cfaab691a7f33316421799e1cc2ef | Wikigold with IOB tags
| knilakshan20/wikigold | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-09T18:45:00+00:00 |
90cc464bf21bd49eba977b2b7a56590038c1c19a | # Beethoven Sonatas Dataset
Beethoven is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.
The dataset was originally introduced in the SampleRNN paper by Mehri et al. (2017) and download details from the original paper can be found at https://github.com/soroushmehr/sampleRNN_ICLR2017/tree/master/datasets/music. Here, we provide a more convenient download of a processed version of the dataset in order to standardize future use.
We include two versions of the dataset:
- `beethoven.zip` is a zip file containing 4328 8-second audio clips sampled at 16kHz. These were generated by first joining all the piano sonatas, and then splitting the track into 8-second chunks. This data can also be used with the https://github.com/HazyResearch/state-spaces repository to reproduce SaShiMi results, and was the dataset used in the paper.
- `beethoven_raw.zip` contains the raw audio tracks, sampled at 16kHz.
We recommend (and follow) the following train-validation-test split for the audio files in `beethoven.zip` (we attempted to recreate the splits from the SampleRNN work as closely as possible):
- `0.wav` to `3807.wav` for training
- `3808.wav` to `4067.wav` for validation
- `4068.wav` to `4327.wav` for testing
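Below is a minimal sketch of materializing these splits from the extracted `beethoven.zip` (the directory name is an assumption; the index ranges are the ones listed above):
```python
from pathlib import Path

data_dir = Path("beethoven")  # directory containing 0.wav ... 4327.wav after unzipping

def split_files(start, end):
    """Return the clip paths whose numeric file names fall in [start, end]."""
    return [data_dir / f"{i}.wav" for i in range(start, end + 1)]

train_files = split_files(0, 3807)
val_files = split_files(3808, 4067)
test_files = split_files(4068, 4327)

print(len(train_files), len(val_files), len(test_files))  # 3808, 260, 260
```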
You can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:
```
@article{goel2022sashimi,
title={It's Raw! Audio Generation with State-Space Models},
author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
journal={arXiv preprint arXiv:2202.09729},
year={2022}
}
@inproceedings{mehri2017samplernn,
title={SampleRNN: An Unconditional End-to-End Neural Audio Generation Model},
author={Mehri, Soroush and Kumar, Kundan and Gulrajani, Ishaan and Kumar, Rithesh and Jain, Shubham and Sotelo, Jose and Courville, Aaron and Bengio, Yoshua},
booktitle={International Conference on Learning Representations},
year={2017}
}
``` | krandiash/beethoven | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-22T03:25:50+00:00 |
fe62f33d2af5db6f01e504ec1f360da7df9692e8 | # SC09 Dataset
SC09 is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It was previously used as a challenging problem for unconditional audio generation by Donahue et al. (2019), and was originally introduced as a dataset for keyword spotting by Warden (2018). The SC09 dataset consists of 1s clips of utterances of the digits zero through nine across a variety of speakers, with diverse accents and noise conditions.
We include an `sc09.zip` file that contains:
- folders `zero` through `nine`, each containing audio files sampled at 16kHz corresponding to utterances for the digit
- `validation_list.txt` containing the list of validation utterances
- `testing_list.txt` containing the list of testing utterances
- the original `LICENSE` file
We split the data into train-val-test for training SaShiMi models and baselines by following the splits provided in `validation_list.txt` and `testing_list.txt`.
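A minimal sketch of rebuilding those splits from the extracted archive (the directory name is an assumption, and each list file is assumed to contain one relative path per line, e.g. `seven/xxxx.wav`):
```python
from pathlib import Path

root = Path("sc09")  # extracted sc09.zip
digits = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def read_list(name):
    # Each line is assumed to be a relative path such as "seven/abc123_nohash_0.wav".
    return set((root / name).read_text().split())

val_set = read_list("validation_list.txt")
test_set = read_list("testing_list.txt")

all_files = [p.relative_to(root).as_posix()
             for d in digits for p in (root / d).glob("*.wav")]
train_files = [f for f in all_files if f not in val_set | test_set]
val_files = [f for f in all_files if f in val_set]
test_files = [f for f in all_files if f in test_set]
```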
We also include a `sc09_quantized.zip` file, which contains examples that were used in our MTurk study (details of which can be found in the SaShiMi paper). In particular, we take 50 random examples from each digit class and run each through a round of mu-law quantization followed by dequantization. This mimics the quantization noise that is experienced by samples generated by autoregressive models that are trained with mu-law quantization.
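For illustration, a rough sketch of that quantize-then-dequantize round trip with 8-bit mu-law (mu = 255) on waveforms scaled to [-1, 1]; this is an assumption about the exact procedure, not the script actually used:
```python
import numpy as np

MU = 255  # standard 8-bit mu-law

def mu_law_encode(x, mu=MU):
    """Map a waveform in [-1, 1] to integer bins in [0, mu]."""
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.floor((compressed + 1) / 2 * mu + 0.5).astype(np.int64)

def mu_law_decode(bins, mu=MU):
    """Map integer bins in [0, mu] back to a waveform in [-1, 1]."""
    compressed = 2 * (bins.astype(np.float64) / mu) - 1
    return np.sign(compressed) * ((1 + mu) ** np.abs(compressed) - 1) / mu

# Round-tripping a clip reproduces the kind of quantization noise described above.
wave = np.random.uniform(-1.0, 1.0, size=16000)
noisy_wave = mu_law_decode(mu_law_encode(wave))
```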
You can use the following BibTeX entries to appropriately cite prior work related to this dataset if you decide to use this in your research:
```
@article{goel2022sashimi,
title={It's Raw! Audio Generation with State-Space Models},
author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
journal={arXiv preprint arXiv:2202.09729},
year={2022}
}
@inproceedings{donahue2019adversarial,
title={Adversarial Audio Synthesis},
author={Donahue, Chris and McAuley, Julian and Puckette, Miller},
booktitle={International Conference on Learning Representations},
year={2019}
}
@article{Warden2018SpeechCA,
title={Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition},
author={Pete Warden},
journal={ArXiv},
year={2018},
volume={abs/1804.03209}
}
``` | krandiash/sc09 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-22T03:26:11+00:00 |
f1f42e8f692f0b5352c07efb93091d6a2453e2b0 | # YouTubeMix Dataset
YouTubeMix is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It has been used primarily as a source of single instrument piano music for training music generation models at a small scale.
The dataset uses the audio track from https://www.youtube.com/watch?v=EhO_MrRfftU,
and was originally used in the SampleRNN GitHub repository from the Deep Sound Project (https://github.com/deepsound-project/samplernn-pytorch).
_Please note that download and use of this data should be for academic and research purposes only, in order to constitute fair use under US copyright law. We take no responsibility for any copyright infringements that take place by users who download and use this data._
We include two versions of the dataset:
- `youtubemix.zip` is a zip file containing 241 1-minute audio clips (re)sampled at 16kHz. These were generated by splitting the original audio track. This is provided for use with the https://github.com/HazyResearch/state-spaces repository to reproduce SaShiMi results, and was the dataset used in the paper.
- `raw.wav` is the raw audio track from the YouTube video, sampled at 44.1kHz.
We recommend (and follow) the following train-validation-test split for the audio files in `youtubemix.zip`:
- `out000.wav` to `out211.wav` for training
- `out212.wav` to `out225.wav` for validation
- `out226.wav` to `out240.wav` for testing
You can use the following BibTeX entries to appropriately cite prior work if you decide to use this in your research:
```
@article{goel2022sashimi,
title={It's Raw! Audio Generation with State-Space Models},
author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
journal={arXiv preprint arXiv:2202.09729},
year={2022}
}
@misc{deepsound,
author = {DeepSound},
title = {SampleRNN},
year = {2017},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/deepsound-project/samplernn-pytorch}},
}
``` | krandiash/youtubemix | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-22T03:26:01+00:00 |
c03311f799a8599b310cf2a5f43ee8a1f86cfd1f |
# Dataset Card for kudo-research/mustc-en-es-text-only
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ict.fbk.eu/must-c-release-v1-2/](https://ict.fbk.eu/must-c-release-v1-2/)
- **Repository:** n/a
- **Paper:** [MuST-C: A multilingual corpus for end-to-end speech translation](https://www.sciencedirect.com/science/article/abs/pii/S0885230820300887)
- **Leaderboard:** n/a
- **Point of Contact:** Roldano Cattoni <[email protected]>; Marco Turchi <[email protected]>
### Dataset Summary
This dataset is a selection of text only (English-Spanish) from the MuST-C corpus.
MuST-C is a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 14 languages (Arabic, Chinese, Czech, Dutch, French, German, Italian, Persian, Portuguese, Romanian, Russian, Spanish, Turkish and Vietnamese).
For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for machine-translation.
[More Information Needed]
### Languages
- en-US
- es-ES
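A minimal loading sketch with 🤗 Datasets (split names other than `train` are assumptions; see the split table below):
```python
from datasets import load_dataset

dataset = load_dataset("kudo-research/mustc-en-es-text-only")

# Each row holds a "translation" dict keyed by language code.
pair = dataset["train"][0]["translation"]
print(pair["en"])
print(pair["es"])
```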
## Dataset Structure
### Data Instances
Dataset example:
```
{
"translation": {
"en": "I'll tell you one quick story to illustrate what that's been like for me.",
"es": "Les diré una rápida historia para ilustrar lo que ha sido para mí."
}
}
```
### Data Fields
The fields are:
- `translation`: an object containing two items, constructed as key-value pairs:
- language code (key)
- text (value)
### Data Splits
More Information Needed...
|                         | Train   | Valid | Test |
|-------------------------|---------|-------|------|
| Input Sentences | 265,625 | 1316 | 2502 |
| Average Sentence Length | n/a | n/a | n/a |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
TED Talks
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
FBK - Fondazione Bruno Kessler, Trento, Italy
- Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, Marco Turchi
### Licensing Information
- TED talks are copyrighted by TED Conference LLC and licensed under a
Creative Commons Attribution-NonCommercial-NoDerivs 4.0
(cfr. https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)
- the MuST-C corpus is released under the same Creative Commons
Attribution-NonCommercial-NoDerivs 4.0 License.
### Citation Information
Bibtex reference:
```
@article{CATTONI2021101155,
title = {MuST-C: A multilingual corpus for end-to-end speech translation},
journal = {Computer Speech & Language},
volume = {66},
pages = {101155},
year = {2021},
issn = {0885-2308},
doi = {https://doi.org/10.1016/j.csl.2020.101155},
url = {https://www.sciencedirect.com/science/article/pii/S0885230820300887},
author = {Roldano Cattoni and Mattia Antonino {Di Gangi} and Luisa Bentivogli and Matteo Negri and Marco Turchi},
keywords = {Spoken language translation, Multilingual corpus},
abstract = {End-to-end spoken language translation (SLT) has recently gained popularity thanks to the advancement of sequence to sequence learning in its two parent tasks: automatic speech recognition (ASR) and machine translation (MT). However, research in the field has to confront with the scarcity of publicly available corpora to train data-hungry neural networks. Indeed, while traditional cascade solutions can build on sizable ASR and MT training data for a variety of languages, the available SLT corpora suitable for end-to-end training are few, typically small and of limited language coverage. We contribute to fill this gap by presenting MuST-C, a large and freely available Multilingual Speech Translation Corpus built from English TED Talks. Its unique features include: i) language coverage and diversity (from English into 14 languages from different families), ii) size (at least 237 hours of transcribed recordings per language, 430 on average), iii) variety of topics and speakers, and iv) data quality. Besides describing the corpus creation methodology and discussing the outcomes of empirical and manual quality evaluations, we present baseline results computed with strong systems on each language direction covered by MuST-C.}
}
```
[DOI available here](https://doi.org/10.1016/j.csl.2020.101155)
### Contributions
Thanks to [@dblandan](https://github.com/dblandan) for adding this dataset.
| kudo-research/mustc-en-es-text-only | [
"annotations_creators:other",
"language_creators:other",
"multilinguality:translation",
"size_categories:unknown",
"language:en",
"language:es",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en", "es"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["conditional-text-generation"], "task_ids": ["machine-translation"], "pretty_name": "must-c_en-es_text-only", "language_bcp47": ["en-US", "es-ES"]} | 2022-10-22T07:40:43+00:00 |
d1cc6c3bdcda13261efef2264eab18b75adb25b4 | kyryl0s/ukbbc | [
"license:wtfpl",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "wtfpl"} | 2022-02-06T21:25:07+00:00 |
|
c38ca7464e9934d9a49f88b3f60f5ad63b245465 | # Filtered WIT, an Image-Text Dataset.
A reliable Dataset to run Image-Text models.
You can find WIT, Wikipedia Image Text Dataset, [here](https://github.com/google-research-datasets/wit)
Data was taken from [dalle-mini/wit](https://huggingface.co/datasets/dalle-mini/wit)
## Author
- [Aarush Katta](https://github.com/ARKseal)
## Data Structure
The data is stored as tars, containing 10,000 samples per tar.
The parquets contain the metadata of each tar, which was created using [this script](https://huggingface.co/datasets/laion/filtered-wit/blob/main/wit_create_meta.py)
Each tar contains a `.jpg`, `.txt`, and `.json`.
The image is stored in `.jpg`, the caption in `.txt`, and the metadata in `.json`
The preferred method to read the data is [WebDataset](https://github.com/webdataset/webdataset)
Here's an example:
```python
import webdataset as wds
dataset = wds.WebDataset('data/00000.tar').to_tuple('txt', 'jpg', 'json')
for text, image, meta in dataset:
print(
text[:50],
image[:50],
meta[:50]
)
```
## Filtering
Each sample has 8 possible captions which were compared to the image using [CLIP ViT-B32](https://arxiv.org/abs/2103.00020)
The text was encoded using [multilingual CLIP text encoder](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1)
Each possible caption was compared to the encoded image using cosine similarity
and kept if the similarity was greater than `0.26`.
The new caption was then formed by concatenating the kept captions, and samples with no remaining caption were dropped.
The script used is [filter_wit.py](https://huggingface.co/datasets/laion/filtered-wit/blob/main/filter_wit.py)
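For illustration only, a rough sketch of that filtering logic using `sentence-transformers` (this is not the actual `filter_wit.py`; the model handles and aggregation details are assumptions):
```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# Image encoder (CLIP ViT-B/32) and the multilingual text encoder mentioned above.
image_encoder = SentenceTransformer("clip-ViT-B-32")
text_encoder = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

def filter_captions(image_path, captions, threshold=0.26):
    """Keep captions whose cosine similarity with the image exceeds the threshold."""
    image_emb = image_encoder.encode(Image.open(image_path), convert_to_tensor=True)
    caption_embs = text_encoder.encode(captions, convert_to_tensor=True)
    sims = util.cos_sim(caption_embs, image_emb).squeeze(-1)
    kept = [c for c, s in zip(captions, sims) if s.item() > threshold]
    # The final caption concatenates the kept captions; samples with none are dropped.
    return " ".join(kept) if kept else None
```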
| laion/filtered-wit | [
"arxiv:2103.00020",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-29T22:12:01+00:00 |
025f445e318a00406362710c57217bbef69aec6f |
# Dataset Card for Science Fiction TV Show Plots Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Format](#format)
- [Using the Dataset with Hugging Face](#call-scifi)
- [Original Dataset Structure](#dataset-structure)
- [Files in _OriginalStoriesSeparated_ Directory](#original-stories)
- [Additional Information](#additional-information)
- [Citation](#citation)
- [Licensing](#licensing)
## Dataset Description
A collection of long-running (80+ episodes) science fiction TV show plot synopses, scraped from Fandom.com wikis. Collected Nov 2017. Each episode is considered a "story".
Contains plot summaries from:
- Babylon 5 (https://babylon5.fandom.com/wiki/Main_Page) - 84 stories
- Doctor Who (https://tardis.fandom.com/wiki/Doctor_Who_Wiki) - 311 stories
- Doctor Who spin-offs - 95 stories
- Farscape (https://farscape.fandom.com/wiki/Farscape_Encyclopedia_Project:Main_Page) - 90 stories
- Fringe (https://fringe.fandom.com/wiki/FringeWiki) - 87 stories
- Futurama (https://futurama.fandom.com/wiki/Futurama_Wiki) - 87 stories
- Stargate (https://stargate.fandom.com/wiki/Stargate_Wiki) - 351 stories
- Star Trek (https://memory-alpha.fandom.com/wiki/Star_Trek) - 701 stories
- Star Wars books (https://starwars.fandom.com/wiki/Main_Page) - 205 stories, each book is a story
- Star Wars Rebels (https://starwarsrebels.fandom.com/wiki/Main_page) - 65 stories
- X-Files (https://x-files.fandom.com/wiki/Main_Page) - 200 stories
Total: 2276 stories
Dataset is "eventified" and generalized (see LJ Martin, P Ammanabrolu, X Wang, W Hancock, S Singh, B Harrison, and MO Riedl. Event Representations for Automated Story Generation with Deep Neural Nets, Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018. for details on these processes.) and split into train-test-validation sets—separated by story so that full stories will stay together—for converting events into full sentences.
---
### Format
| Dataset Split | Number of Stories in Split | Number of Sentences in Split |
| ------------- |--------------------------- |----------------------------- |
| Train | 1737 | 257,108 |
| Validation | 194 | 32,855 |
| Test | 450 | 30,938 |
#### Using the Dataset with Hugging Face
```
from datasets import load_dataset
#download and load the data
dataset = load_dataset('lara-martin/Scifi_TV_Shows')
#you can then get the individual splits
train = dataset['train']
test = dataset['test']
validation = dataset['validation']
```
Each split has 7 attributes (explained in more detail in the next section):
```
>>> print(train)
Dataset({
features: ['story_num', 'story_line', 'event', 'gen_event', 'sent', 'gen_sent', 'entities'],
num_rows: 257108
})
```
---
## Original Dataset Structure
* File names: scifi-val.txt, scifi-test.txt, & scifi-train.txt
* Each sentence of the stories is split into smaller sentences and the events are extracted.
* Each line of the file contains information about a single sentence, delimited by "|||". Each line contains, in order:
* The story number
* The line number (within the story)
* 5-tuple events in a list (subject, verb, direct object, modifier noun, preposition); e.g.,
``
[[u'Voyager', u'run', 'EmptyParameter', u'deuterium', u'out'], [u'Voyager', u'force', u'go', 'EmptyParameter', 'EmptyParameter'], [u'Voyager', u'go', 'EmptyParameter', u'mode', u'into']]
``
* generalized 5-tuple events in a list; events are generalized using WordNet and VerbNet; e.g.,
``
[['<VESSEL>0', 'function-105.2.1', 'EmptyParameter', "Synset('atom.n.01')", u'out'], ['<VESSEL>0', 'urge-58.1-1', u'escape-51.1-1', 'EmptyParameter', 'EmptyParameter'], ['<VESSEL>0', u'escape-51.1-1', 'EmptyParameter', "Synset('statistic.n.01')", u'into']]
``
* original sentence (These sentences are split to contain fewer events per sentence. For the full original sentence, see the OriginalStoriesSeparated directory.); e.g.,
``
The USS Voyager is running out of deuterium as a fuel and is forced to go into Gray mode.
``
* generalized sentence; only nouns are generalized (using WordNet); e.g.,
``
the <VESSEL>0 is running out of Synset('atom.n.01') as a Synset('matter.n.03') and is forced to go into Synset('horse.n.01') Synset('statistic.n.01').
``
* a dictionary of numbered entities by tag within the _entire story_ (e.g. the second entity in the "<ORGANIZATION>" list in the dictionary would be <ORGANIZATION>1 in the story above; the index starts at 0); e.g.,
``
{'<ORGANIZATION>': ['seven of nine', 'silver blood'], '<LOCATION>': ['sickbay', 'astrometrics', 'paris', 'cavern', 'vorik', 'caves'], '<DATE>': ['an hour ago', 'now'], '<MISC>': ['selected works', 'demon class', 'electromagnetic', 'parises', 'mimetic'], '<DURATION>': ['less than a week', 'the past four years', 'thirty seconds', 'an hour', 'two hours'], '<NUMBER>': ['two', 'dozen', '14', '15'], '<ORDINAL>': ['first'], '<PERSON>': ['tom paris', 'harry kim', 'captain kathryn janeway', 'tuvok', 'chakotay', 'jirex', 'neelix', 'the doctor', 'seven', 'ensign kashimuro nozawa', 'green', 'lt jg elanna torres', 'ensign vorik'], '<VESSEL>': ['uss voyager', 'starfleet']}
``
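For convenience, a rough sketch of parsing one line of these files (it assumes the `|||` delimiter and Python-literal list/dict fields shown above; the field names follow the Hugging Face features listed earlier):
```python
import ast

def parse_line(line):
    story_num, line_num, events, gen_events, sent, gen_sent, entities = line.rstrip("\n").split("|||")
    return {
        "story_num": int(story_num),
        "story_line": int(line_num),
        "event": ast.literal_eval(events),          # list of 5-tuple events
        "gen_event": ast.literal_eval(gen_events),  # generalized events
        "sent": sent,
        "gen_sent": gen_sent,
        "entities": ast.literal_eval(entities),     # tag -> list of entity strings
    }

with open("scifi-val.txt", encoding="utf-8") as f:
    first_example = parse_line(next(f))
```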
### Files in _OriginalStoriesSeparated_ Directory
* Contains unedited, unparsed original stories scraped from the respective Fandom wikis.
* Each line is a story with sentences space-separated. After each story, there is a <EOS> tag on a new line.
* There is one file for each of the 11 domains listed above.
* These are currently not set up to be called through the Hugging Face API and must be extracted from the zip directly.
---
## Additional Information
### Citation
```
@inproceedings{Ammanabrolu2020AAAI,
title={Story Realization: Expanding Plot Events into Sentences},
author={Prithviraj Ammanabrolu and Ethan Tien and Wesley Cheung and Zhaochen Luo and William Ma and Lara J. Martin and Mark O. Riedl},
journal={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
year={2020},
volume={34},
number={05},
url={https://ojs.aaai.org//index.php/AAAI/article/view/6232}
}
```
---
### Licensing
The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/ | lara-martin/Scifi_TV_Shows | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"story",
"storytelling",
"creative",
"summaries",
"TV",
"scifi",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation"], "pretty_name": "Scifi TV Shows", "tags": ["story", "storytelling", "creative", "summaries", "TV", "scifi"]} | 2024-02-08T20:57:46+00:00 |
fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 |
# PAC - Polish Abusive Clauses Dataset
''I have read and agree to the terms and conditions'' is one of the biggest lies on the Internet. Consumers rarely read the contracts they are required to accept. We conclude agreements over the Internet daily. But do we know the content of these agreements? Do we check potential unfair statements? On the Internet, we probably skip most of the Terms and Conditions. However, we must remember that we have concluded many more contracts. Imagine that we want to buy a house, a car, send our kids to the nursery, or open a bank account. In all these situations, you will need to conclude a contract, but there is a high probability that you will not read the entire agreement with proper understanding. European consumer law aims to prevent businesses from using so-called ''unfair contractual terms'' in unilaterally drafted contracts that consumers are required to accept.
Our dataset treats ''unfair contractual term'' as the equivalent of an abusive clause. It could be defined as a clause that is unilaterally imposed by one of the contract's parties, unequally affecting the other, or creating a situation of imbalance between the duties and rights of the parties.
At the EU level, and at national levels such as Poland's, agencies cannot check every possible agreement by hand. Hence, we took the first step towards evaluating the possibility of accelerating this process. We created a dataset and machine learning models to partially automate the detection of potentially abusive clauses. Consumer protection organizations and agencies can use these resources to make their work more effective and efficient. Moreover, consumers can automatically analyze contracts and understand what they agree upon.
## Tasks (input, output and metrics)
Abusive Clauses Detection
**Input** (*'text'* column): text of agreement
**Output** (*'label'* column): binary label (`BEZPIECZNE_POSTANOWIENIE_UMOWNE`: correct agreement statement, `KLAUZULA_ABUZYWNA`: abusive clause)
**Domain**: legal agreement
**Measurements**: Accuracy, F1 Macro
**Example**:
Input: *`Wszelka korespondencja wysyłana przez Pożyczkodawcę na adres zamieszkania podany w umowie oraz na e-mail zostaje uznana za skutecznie doręczoną. Zmiana adresu e-mail oraz adresu zamieszkania musi być dostarczona do Pożyczkodawcy osobiście`*
Input (translated by DeepL): *`All correspondence sent by the Lender to the residential address provided in the agreement and to the e-mail address shall be deemed effectively delivered. Change of e-mail address and residential address must be delivered to the Lender in person`*
Output: `KLAUZULA_ABUZYWNA` (abusive clause)
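A minimal loading sketch with 🤗 Datasets (column names follow the task description above; split names other than `train` may differ):
```python
from datasets import load_dataset

dataset = load_dataset("laugustyniak/abusive-clauses-pl")

sample = dataset["train"][0]
print(sample["text"][:80], "->", sample["label"])
```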
## Data splits
| Subset | Cardinality (sentences) |
| ----------- | ----------------------: |
| train | 4284 |
| dev | 1519 |
| test | 3453 |
## Class distribution
`BEZPIECZNE_POSTANOWIENIE_UMOWNE` - means correct agreement statement.
`KLAUZULA_ABUZYWNA` informs us about abusive clause.
| Class | train | dev | test |
|:--------------------------------|--------:|-------------:|-------:|
| BEZPIECZNE_POSTANOWIENIE_UMOWNE | 0.5458 | 0.3002 | 0.6756 |
| KLAUZULA_ABUZYWNA | 0.4542 | 0.6998 | 0.3244 |
## License
[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Citation
```bibtex
@inproceedings{NEURIPS2022_890b206e,
author = {Augustyniak, Lukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and Szymczak, Adrian and Janz, Arkadiusz and Szyma\'{n}ski, Piotr and W\k{a}troba, Marcin and Morzy, Miko\l aj and Kajdanowicz, Tomasz and Piasecki, Maciej},
booktitle = {Advances in Neural Information Processing Systems},
editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
pages = {21805--21818},
publisher = {Curran Associates, Inc.},
title = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/890b206ebb79e550f3988cb8db936f42-Paper-Datasets_and_Benchmarks.pdf},
volume = {35},
year = {2022}
}
``` | laugustyniak/abusive-clauses-pl | [
"task_categories:text-classification",
"annotations_creators:hired_annotators",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10<n<10K",
"language:pl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["hired_annotators"], "language_creators": ["found"], "language": ["pl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10<n<10K"], "task_categories": ["text-classification"], "task_ids": ["text-classification"], "pretty_name": "Polish-Abusive-Clauses"} | 2023-03-29T09:46:49+00:00 |
fbf9bb8761bafeb5d7e158901446da58f6a71d9c |
# Dataset Card for German Legal Sentences
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lavis-nlp.github.io/german_legal_sentences/
- **Repository:** https://github.com/lavis-nlp/german_legal_sentences
- **Paper:** coming soon
- **Leaderboard:**
- **Point of Contact:** [Marco Wrzalik](mailto:[email protected])
### Dataset Summary
German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose, we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342).
### Supported Tasks and Leaderboards
The main associated task is *Semantic Similarity Ranking*. We propose to use the *Mean Reciprocal Rank* (MRR) cut at the tenth position, as well as MAP and Recall on rankings of size 200. As baselines, we provide the following:
| Method | MRR@10 | MAP@200 | Recall@200 |
|-----------------------------------|---------:|-----------:|------------:|
| BM25 - default `(k1=1.2; b=0.75)` | 25.7 | 17.6 | 42.9 |
| BM25 - tuned `(k1=0.47; b=0.97)` | 26.2 | 18.1 | 43.3 |
| [CoRT](https://arxiv.org/abs/2010.10252) | 31.2 | 21.4 | 56.2 |
| [CoRT + BM25](https://arxiv.org/abs/2010.10252) | 32.1 | 22.1 | 67.1 |
In addition, we want to support a *Citation Recommendation* task in the future.
If you wish to contribute evaluation measures or give any suggestion or critique, please write an [e-mail](mailto:[email protected]).
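For reference, a minimal sketch of the proposed MRR@10 computation (it assumes one ranked candidate list per query with binary relevance judgments):
```python
def mrr_at_k(rankings, k=10):
    """rankings: one ranked list per query; each entry is True if the candidate
    at that position is relevant to the query."""
    total = 0.0
    for ranked in rankings:
        for position, is_relevant in enumerate(ranked[:k], start=1):
            if is_relevant:
                total += 1.0 / position
                break
    return total / len(rankings)

# Relevant item at rank 1, rank 3, and missing from the top 10: (1 + 1/3 + 0) / 3
print(mrr_at_k([[True], [False, False, True], [False] * 10]))  # ~0.444
```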
### Languages
This dataset contains texts from the specific domain of German court decisions.
## Dataset Structure
### Data Instances
```
{'query.doc_id': 28860,
'query.ref_ids': [6215, 248, 248],
'query.sent_id': 304863,
'query.text': 'Zudem ist zu berücksichtigen , dass die Vollverzinsung nach '
'[REF] i. V. m. [REF] gleichermaßen zugunsten wie zulasten des '
'Steuerpflichtigen wirkt , sodass bei einer Überzahlung durch '
'den Steuerpflichtigen der Staat dem Steuerpflichtigen neben '
'der Erstattung ebenfalls den entstandenen potentiellen Zins- '
'und Liquiditätsnachteil in der pauschalierten Höhe des [REF] '
'zu ersetzen hat , unabhängig davon , in welcher Höhe dem '
'Berechtigten tatsächlich Zinsen entgangen sind .',
'related.doc_id': 56348,
'related.ref_ids': [248, 6215, 62375],
'related.sent_id': 558646,
'related.text': 'Ferner ist zu berücksichtigen , dass der Zinssatz des [REF] '
'im Rahmen des [REF] sowohl für Steuernachforderung wie auch '
'für Steuererstattungen und damit gleichermaßen zugunsten wie '
'zulasten des Steuerpflichtigen wirkt , Vgl. BVerfG , '
'Nichtannahmebeschluss vom [DATE] [REF] , juris , mit der '
'Folge , dass auch Erstattungsansprüche unabhängig davon , ob '
'und in welcher Höhe dem Berechtigten tatsächlich Zinsen '
'entgangen sind , mit monatlich 0,0 % verzinst werden .'}
```
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The documents we take from [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342) are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into its components and normalized, so that different variants of the same citation are matched together. For instance, "§211 Absatz 1 des Strafgesetzbuches" is normalized to "§ 211 Abs. 1 StGB". Every time we discover an unknown citation, we assign a unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g., `[REF321]`). At the same time we parse dates and replace them with the date tag `[DATE]`. Both replacements remove dots that may be confused with the end of a sentence, which makes the next stage easier.
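As an illustration of the kind of normalization described above, the following sketch uses a hypothetical regular expression and abbreviation map (not the ones actually used):
```python
import re

ABBREV = {"Absatz": "Abs.", "des Strafgesetzbuches": "StGB"}  # hypothetical map

def normalize_citation(text):
    # e.g. "§211 Absatz 1 des Strafgesetzbuches" -> "§ 211 Abs. 1 StGB"
    match = re.match(r"§\s*(\d+)\s+Absatz\s+(\d+)\s+(des Strafgesetzbuches)", text)
    if not match:
        return text
    paragraph, absatz, law = match.groups()
    return f"§ {paragraph} {ABBREV['Absatz']} {absatz} {ABBREV[law]}"

print(normalize_citation("§211 Absatz 1 des Strafgesetzbuches"))  # § 211 Abs. 1 StGB
```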
We use [SoMaJo](https://github.com/tsproisl/SoMaJo) to perform sentence tokenizing on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence ids, remove all reference ids from them as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which a sentence originates and which references occur in it.
#### Who are the source language producers?
The source language originates in the context of German court proceedings.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
The source documents are already public and anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
With this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Coming soon!
### Contributions
Thanks to [@mwrzalik](https://github.com/mwrzalik) for adding this dataset. | lavis-nlp/german_legal_sentences | [
"task_categories:text-retrieval",
"task_ids:semantic-similarity-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n>1M",
"source_datasets:original",
"language:de",
"license:unknown",
"arxiv:2005.13342",
"arxiv:2010.10252",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["de"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n>1M"], "source_datasets": ["original"], "task_categories": ["text-retrieval", "text-scoring"], "task_ids": ["semantic-similarity-scoring", "text-retrieval-other-example-based-retrieval"]} | 2022-10-20T17:34:19+00:00 |
a8cc9be686d49bec5783b9556531620875d05eba |
# Dataset Card for CoFiF
## Dataset Description
- **Repository:** https://github.com/CoFiF/Corpus
- **Paper:** https://aclanthology.org/W19-5504/
- **Point of Contact:** Sina Ahmadi ([email protected]) and Tobias Daudert ([email protected])
### Dataset Summary
CoFiF is the first corpus comprising company reports in the French language. It contains over **188 million** tokens in **2655** reports, covering four types of documents:
- Reference documents (documents de référence) published annually, usually in the months following the end of the calendar year, and contain information regarding the financial situation and perspectives of a company
- Annual report (résultats annuels) which summarises a company’s business and activities throughout the previous year
- Semestrial (résultats semestriels): similar to annual reports in content but published every 6 months
- Trimestrial reports (résultats trimestriels): similar to annual reports but published every 3 months
These documents are collected from the 60 largest French companies listed in France’s main stock indices [CAC40](https://en.wikipedia.org/wiki/CAC_40) and [CAC Next 20](https://en.wikipedia.org/wiki/CAC_Next_20). The corpus spans over 20 years, ranging from 1995 to 2018.
### Supported Tasks and Leaderboards
Language modeling: the corpus can be used to train a French model (CamemBERT, FlauBERT, BARThez, etc.) on financial-domain data
### Languages
French
## Dataset Structure
Raw text
## Dataset Creation
### Curation Rationale
No French language datasets related to finance were available.
### Source Data
The source data consists of the four types of company reports described in the Dataset Summary above (reference documents, annual, semestrial and trimestrial reports), collected from the 60 largest French companies listed in France's main stock indices CAC40 and CAC Next 20. The corpus spans over 20 years, ranging from 1995 to 2018.
#### Initial Data Collection and Normalization
The authors have done a cleanup but have not detailed it in their publication or on their GitHub directory.
#### Who are the source language producers?
Public administrative reports written by humans.
### Annotations
No annotations.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289, co-funded by the European Regional Development Fund.
### Licensing Information
The authors of the publication (Tobias Daudert and Sina Ahmadi) do not acknowledge any third party, so it appears that they were the only ones involved in the data collection.
This corpus is openly available for non-commercial use under the [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
If you're using CoFiF in your research, please don't forget to cite [this paper](https://www.aclweb.org/anthology/papers/W/W19/W19-5504/):
~~~
@inproceedings{daudert-ahmadi-2019-cofif,
title = "{C}o{F}i{F}: A Corpus of Financial Reports in {F}rench Language",
author = "Daudert, Tobias and Ahmadi, Sina",
booktitle = "Proceedings of the First Workshop on Financial Technology and Natural Language Processing",
month = "12 " # aug,
year = "2019",
address = "Macao, China",
url = "https://www.aclweb.org/anthology/W19-5504",
pages = "21--26",
}
~~~ | FrancophonIA/CoFiF | [
"task_categories:fill-mask",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["fill-mask"]} | 2023-10-25T15:34:56+00:00 |
f8bf264495f105aaae53aa559a1af98875c9f10c | # Dataset Card for `lbox_open`
## Dataset Description
- **Homepage:** `https://lbox.kr`
- **Repository:** `https://github.com/lbox-kr/lbox_open`
- **Point of Contact:** [Wonseok Hwang](mailto:[email protected])
### Dataset Summary
A Legal AI Benchmark Dataset from Korean Legal Cases.
### Languages
Korean
### How to use
```python
from datasets import load_dataset
# casename classficiation task
data_cn = load_dataset("lbox/lbox_open", "casename_classification")
data_cn_plus = load_dataset("lbox/lbox_open", "casename_classification_plus")
# statutes classification task
data_st = load_dataset("lbox/lbox_open", "statute_classification")
data_st_plus = load_dataset("lbox/lbox_open", "statute_classification_plus")
# Legal judgement prediction tasks
data_ljp_criminal = load_dataset("lbox/lbox_open", "ljp_criminal")
data_ljp_civil = load_dataset("lbox/lbox_open", "ljp_civil")
# case summarization task
data_summ = load_dataset("lbox/lbox_open", "summarization")
data_summ_plus = load_dataset("lbox/lbox_open", "summarization_plus")
# precedent corpus
data_corpus = load_dataset("lbox/lbox_open", "precedent_corpus")
```
For more information about the dataset, please visit <https://github.com/lbox-kr/lbox_open>.
## Licensing Information
Copyright 2022-present [LBox Co. Ltd.](https://lbox.kr/)
Licensed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) | lbox/lbox_open | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "cc-by-nc-4.0"} | 2022-11-09T06:41:26+00:00 |
e82279f58c20065502e66ca496ca54a11fdb57cb | The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.
https://groups.inf.ed.ac.uk/ami/corpus/ | leoapolonio/AMI_Meeting_Corpus | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-27T12:16:24+00:00 |
317fdc0bede6f6ef0c941c30547395966f87f82a | 轴承故障诊断领域的NER语料,按BIO规则标注。 | leonadase/fdner | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-05-26T11:44:15+00:00 |
361a384e70b4a3950d91c839cc0b4650c83d3e7b | leonadase/mycoll3 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-23T09:41:49+00:00 |
|
e39b708dcfb727f0092341fd471c83de9c73864b |
# Dummy dataset to test evaluation framework for SUPERB. | lewtun/asr-preds-test | [
"benchmark:superb",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"benchmark": "superb", "task": "asr"} | 2021-07-28T00:22:53+00:00 |
75cac4251eb0dbc3282eaa5ff95c608032df6628 |
# Batch job
model_id: lewtun/superb-s3prl-osanseviero__hubert_base-asr-cbcd177a
dataset_name: superb
dataset_config: asr
dataset_split: test
dataset_column: file | lewtun/bulk-superb-s3p-superb-49606 | [
"benchmark:superb",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"benchmark": "superb", "task": "asr", "type": "prediction"} | 2021-08-02T15:38:29+00:00 |
3d61f200cf73279e51fb903b58f80de3fb344769 |
# GEM submissions for gem-sub-03
## Submitting to the benchmark
FILL ME IN
### Submission file format
Please follow this format for your `submission.json` file:
```json
{
"submission_name": "An identifying name of your system",
"param_count": 123, # the number of parameters your system has.
"description": "An optional brief description of the system that will be shown on the website",
"tasks":
{
"dataset_identifier": {
"values": ["output1", "output2", "..."], # A list of system outputs
# Optionally, you can add the keys which are part of an example to ensure that there is no shuffling mistakes.
"keys": ["key-0", "key-1", ...]
}
}
}
```
In this case, `dataset_identifier` is the identifier of the dataset followed by an identifier of the set the outputs were created from, for example `_validation` or `_test`. That means, the `mlsum_de` test set would have the identifier `mlsum_de_test`.
The `keys` field can be set to avoid accidental shuffling to impact your metrics. Simply add a list of the `gem_id` for each output example in the same order as your values.
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
```
python cli.py validate
```
If everything is correct, you should see the following message:
```
All submission files validated! ✨ 🚀 ✨
Now you can make a submission 🤗
```
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
```
python cli.py submit
```
If there are no errors, you should see the following message:
```
Submission successful! 🎉 🥳 🎉
Your submission will be evaulated on Sunday 05 September 2021 ⏳
```
where the evaluation is run every Sunday and your results will be visible on the leaderboard. | lewtun/gem-sub-03 | [
"benchmark:gem",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "T5-base (Baseline)"} | 2021-12-15T14:34:30+00:00 |
e3e3c81a4d7cc98d61d3b63b16b25e92008a9ba3 | # GitHub Issues Dataset | lewtun/github-issues-test | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-12T22:55:28+00:00 |
3bb24dcad2b45b45e20fc0accc93058dcbe8087d | # Dataset Card for GitHub Issues
## Dataset Description
- **Point of Contact:** [Lewis Tunstall]([email protected])
### Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
### Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
### Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.
## Dataset Structure
### Data Instances
Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
'example_field': ...,
...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `example_field`: description of `example_field`
Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
### Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
## Considerations for Using the Data
### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
## Additional Information
### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
Provide the license and link to the license webpage if available.
### Citation Information
Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:
```
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
```
If the dataset has a [DOI](https://www.doi.org/), please provide it here.
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. | lewtun/github-issues | [
"arxiv:2005.00614",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-04T14:49:55+00:00 |
be8eb418f71d209bd05f3f1be13e916c283c6540 |
# Dataset Card for RAFT Submission | lewtun/mnist-preds | [
"benchmark:test",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"benchmark": "test"} | 2021-07-16T08:00:01+00:00 |
b66c0539f6b2df8daab58de1edb5371b19db5486 |
# Dataset Card for Demo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset with two files `train.csv` and `test.csv`.
Load it by:
```python
from datasets import load_dataset
data_files = {"train": "train.csv", "test": "test.csv"}
demo = load_dataset("stevhliu/demo", data_files=data_files)
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | lewtun/my-awesome-dataset | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["conditional-text-generation"], "task_ids": ["summarization"]} | 2022-07-03T04:16:07+00:00 |
49e6a3b37d4666b3554ea90a6c76e02d07505fec |
Open Minuscule
==============
A little small wee corpus to train little small wee models.
## Dataset Description
### Dataset Summary
This is a raw text corpus, mainly intended for testing purposes.
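A minimal loading sketch, assuming the corpus can be loaded directly from the Hub without a dedicated configuration (the splits and columns are not documented on this card, so the snippet only inspects whatever is returned):
```python
from datasets import load_dataset

# Repository id taken from this card; split and column names are not documented.
minuscule = load_dataset("lgrobol/openminuscule")
print(minuscule)                             # available splits and their sizes
first_split = next(iter(minuscule.values()))
print(first_split[0])                        # peek at one record
```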
### Languages
- French
- English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
It is a mashup including the following [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) licensed texts:
- [*Rayons émis par les composés de l’uranium et du
thorium*](https://fr.wikisource.org/wiki/Rayons_%C3%A9mis_par_les_compos%C3%A9s_de_l%E2%80%99uranium_et_du_thorium),
Maria Skłodowska Curie
- [*Frankenstein, or the Modern
Prometheus*](https://en.wikisource.org/wiki/Frankenstein,_or_the_Modern_Prometheus_(Revised_Edition,_1831)),
Mary Wollstonecraft Shelley
- [*Les maîtres sonneurs*](https://fr.wikisource.org/wiki/Les_Ma%C3%AEtres_sonneurs), George Sand
It also includes the text of *Sketch of The Analytical Engine Invented by Charles Babbage With
notes upon the Memoir by the Translator* by Luigi Menabrea and Ada Lovelace, which to the best of
my knowledge should be public domain.
## Considerations for Using the Data
This really should not be used for anything but testing purposes
## Licence
This corpus is available under the Creative Commons Attribution-ShareAlike 4.0 License | lgrobol/openminuscule | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100k<n<1M",
"source_datasets:original",
"language:en",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language_creators": ["crowdsourced"], "language": ["en", "fr"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100k<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Open Minuscule", "language_bcp47": ["en-GB", "fr-FR"]} | 2022-10-23T08:28:36+00:00 |
6bb129c79cbc02860807e12dd09bf9e152c3f73d |
# Dataset Card for "squad"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
### Dataset Summary
This dataset is a custom copy of the original SQuAD dataset. It is used to showcase dataset repositories. Data are the same as the original dataset.
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits Sample Size
| name |train|validation|
|----------|----:|---------:|
|plain_text|87599| 10570|
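A minimal sketch for loading this copy and checking the split sizes above (it assumes the default `plain_text` configuration is used when no configuration name is given):
```python
from datasets import load_dataset

custom_squad = load_dataset("lhoestq/custom_squad")
print({split: ds.num_rows for split, ds in custom_squad.items()})
# expected, based on the table above: {'train': 87599, 'validation': 10570}
```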
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
### Annotations
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. | lhoestq/custom_squad | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:1606.05250",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"]} | 2022-10-25T08:50:53+00:00 |
87ecf163bedca9d80598b528940a9c4f99e14c11 |
# Dataset Card for Demo1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset. It consists of two files, `data/train.csv` and `data/test.csv`.
You can load it with
```python
from datasets import load_dataset
demo1 = load_dataset("lhoestq/demo1")
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| lhoestq/demo1 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"type": "demo"} | 2021-11-08T14:36:41+00:00 |
8af5b3fc20bfa28cc0f09ddc1a0c0bcddf906e3a |
This is a test dataset | lhoestq/test | [
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["other-test"], "task_ids": ["other-test"], "pretty_name": "Test Dataset", "type": "test"} | 2022-07-01T14:26:34+00:00 |
dd797fcf8beacd44987048d5e2606edf1fe0a230 | This is a readme
| lhoestq/test2 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-23T13:21:45+00:00 |
7f14f05cd0effd0d847886ede953e6808c3e3a27 | Sentiment-annotated Sina Weibo posts ({0: 'joy (喜悦)', 1: 'anger (愤怒)', 2: 'disgust (厌恶)', 3: 'low/depressed (低落)'}) | liam168/nlp_c4_sentiment | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-30T03:05:45+00:00 |
7b78af8a83bdeebb85f2d78f883acdb9c947c655 | For personal use only.
Source: https://github.com/junzeng-pluto/ChineseSquad
Thanks! | lijingxin/squad_zen | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-09T03:05:31+00:00 |
6aca57928d2edaffa6f9a29bdecaa789c28d0391 |
# Dataset Card for newsquadfr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [lincoln.fr](https://www.lincoln.fr/)
- **Repository:** [github/Lincoln-France](https://github.com/Lincoln-France)
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email]([email protected])
### Dataset Summary
newsquadfr is a small dataset created for the question answering task. Contexts are paragraphs of articles extracted from nine online French newspapers during 2020/2021. newsquadfr stands for "newspaper question answering dataset in French"; it is inspired by the Piaf and SQuAD datasets. It contains 2,520 context-question-answer triplets.
```py
from datasets import load_dataset
ds_name = 'lincoln/newsquadfr'
# exemple 1
ds_newsquad = load_dataset(ds_name)
# exemple 2
data_files = {'train': 'train.json', 'test': 'test.json', 'valid': 'valid.json'}
ds_newsquad = load_dataset(ds_name, data_files=data_files)
# exemple 3
ds_newsquad = load_dataset(ds_name, data_files=data_files, split="valid+test")
```
Number of training samples per source website:
| website | Nb |
|---------------|-----|
| cnews | 20 |
| francetvinfo | 40 |
| la-croix | 375 |
| lefigaro | 160 |
| lemonde | 325 |
| lesnumeriques | 70 |
| numerama | 140 |
| sudouest | 475 |
| usinenouvelle | 45 |
### Supported Tasks and Leaderboards
- extractive-qa
- open-domain-qa
### Languages
Fr-fr
## Dataset Structure
### Data Instances
```json
{'answers': {'answer_start': [53], 'text': ['manœuvre "agressive']},
'article_id': 34138,
'article_title': 'Caricatures, Libye, Haut-Karabakh... Les six dossiers qui '
'opposent Emmanuel Macron et Recep Tayyip Erdogan.',
'article_url': 'https://www.francetvinfo.fr/monde/turquie/caricatures-libye-haut-karabakh-les-six-dossiers-qui-opposent-emmanuel-macron-et-recep-tayyip-erdogan_4155611.html#xtor=RSS-3-[france]',
 'context': 'Dans ce contexte déjà tendu, la France a dénoncé une manœuvre '
'"agressive" de la part de frégates turques à l\'encontre de l\'un '
"de ses navires engagés dans une mission de l'Otan, le 10 juin. "
'Selon Paris, la frégate Le Courbet cherchait à identifier un '
'cargo suspecté de transporter des armes vers la Libye quand elle '
'a été illuminée à trois reprises par le radar de conduite de tir '
"de l'escorte turque.",
'id': '2261',
'paragraph_id': 201225,
'question': "Qu'est ce que la France reproche à la Turquie?",
'website': 'francetvinfo'}
```
### Data Fields
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int64` feature.
- `article_id`: a `int64` feature.
- `article_title`: a string feature.
- `article_url`: a string feature.
- `context`: a `string` feature.
- `id`: a `string` feature.
- `paragraph_id`: a `int64` feature.
- `question`: a `string` feature.
- `website`: a `string` feature.
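Since answers are given as character offsets into the context, a quick sanity check is to verify that each `answer_start` points at the answer text. The snippet below is a minimal sketch; the `train` split name follows the table in the next section:
```python
from datasets import load_dataset

ds = load_dataset("lincoln/newsquadfr", split="train")

for example in ds.select(range(5)):
    start = example["answers"]["answer_start"][0]
    answer = example["answers"]["text"][0]
    # The answer span should be recoverable directly from the context.
    assert example["context"][start:start + len(answer)] == answer
    print(example["question"], "->", answer)
```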
### Data Splits
| Split | Nb |
|-------|----|
| train |1650|
| test |415 |
| valid |455 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
Paragraphs were chosen according to these rules:
- parent article must have more than 71% ASCII characters
- paragraphs size must be between 170 and 670 characters
- paragraphs shouldn't contain "A LIRE" or "A VOIR AUSSI"
Then, we stratified our original dataset to create this dataset according to :
- website
- number of named entities
- paragraph size
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
Annotations were created with the Piaf annotation tool, mostly by three different people.
#### Who are the annotators?
Lincoln
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
- The annotation process was not tightly controlled
- Asking questions about news articles is inherently biased
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/deed.fr
### Citation Information
[Needs More Information] | lincoln/newsquadfr | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:private",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:newspaper",
"source_datasets:online",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["private"], "language": ["fr-FR"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original", "newspaper", "online"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"]} | 2022-08-05T11:05:24+00:00 |
464a2f8d553fa88728761e7a29bc83ac8ebebcbe | ## PULPO
PULPO, the Prolific Unannotated Literary Poetry Corpus, is a set of multilingual corpora of verses and stanzas with over 95M words.
See https://arxiv.org/abs/2307.01387.
The following corpora have been downloaded using the [Averell](https://github.com/linhd-postdata/averell/) tool, developed by the [POSTDATA](https://postdata.linhd.uned.es/) team:
### Spanish
- [Disco v3](https://github.com/pruizf/disco)
- [Corpus of Spanish Golden-Age Sonnets](https://github.com/bncolorado/CorpusSonetosSigloDeOro)
- [Corpus general de poesía lírica castellana del Siglo de Oro](https://github.com/bncolorado/CorpusGeneralPoesiaLiricaCastellanaDelSigloDeOro)
- [Gongocorpus](https://github.com/linhd-postdata/gongocorpus) - [source](http://obvil.sorbonne-universite.site/corpus/gongora/gongora_obra-poetica)
### English
- [Eighteenth-Century Poetry Archive (ECPA)](https://github.com/alhuber1502/ECPA)
- [For better for verse](https://github.com/waynegraham/for_better_for_verse)
### French
- [Métrique en Ligne](https://crisco2.unicaen.fr/verlaine/index.php?navigation=accueil) - [source](https://github.com/linhd-postdata/metrique-en-ligne)
### Italian
- [Biblioteca italiana](https://github.com/linhd-postdata/biblioteca_italiana) - [source](http://www.bibliotecaitaliana.it/)
### Czech
- [Corpus of Czech Verse](https://github.com/versotym/corpusCzechVerse)
### Portuguese
- [Stichotheque](https://gitlab.com/stichotheque/stichotheque-pt)
Also, we obtained the following corpora from these sources:
### Spanish
- [Poesi.as](https://github.com/linhd-postdata/poesi.as) - [source](http://www.poesi.as/)
### English
- [A Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus)
### Arabic
- [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry)
### Chinese
- [THU Chinese Classical Poetry Corpus](https://github.com/THUNLP-AIPoet/Datasets/tree/master/CCPC)
### Finnish
- [SKVR](https://github.com/sks190/SKVR)
### German
- [TextGrid Poetry Corpus](https://github.com/linhd-postdata/textgrid-poetry) - [source](https://textgrid.de/en/digitale-bibliothek)
- [German Rhyme Corpus](https://github.com/tnhaider/german-rhyme-corpus)
### Hungarian
- [verskorpusz](https://github.com/ELTE-DH/verskorpusz)
### Portuguese
- [Poems in Portuguese](https://www.kaggle.com/oliveirasp6/poems-in-portuguese)
### Russian
- [19 000 Russian poems](https://www.kaggle.com/grafstor/19-000-russian-poems) | linhd-postdata/pulpo | [
"size_categories:10M<n<100M",
"language:es",
"language:en",
"language:fr",
"language:it",
"language:cs",
"language:pt",
"language:ar",
"language:zh",
"language:fi",
"language:de",
"language:hu",
"language:ru",
"poetry",
"arxiv:2307.01387",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["es", "en", "fr", "it", "cs", "pt", "ar", "zh", "fi", "de", "hu", "ru"], "size_categories": ["10M<n<100M"], "pretty_name": "Prolific Unannotated Literary Poetry Corpus", "tags": ["poetry"]} | 2023-07-10T12:38:07+00:00 |
1b0382449b4273d9de8e6d6ad15ca6873884758a | # C4 200M
# Dataset Summary
c4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
# Description
As discussed above, this dataset contains 185 million sentence pairs. Each record has two attributes: `input` and `output`. Here is a sample from the dataset:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` | liweili/c4_200m | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"grammatical-error-correction",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "source_datasets": ["allenai/c4"], "task_categories": ["text-generation"], "pretty_name": "C4 200M Grammatical Error Correction Dataset", "tags": ["grammatical-error-correction"]} | 2022-10-23T10:00:46+00:00 |
3fda50517775f10d7a541b8d3ba5711488c9aae5 | lkiouiou/o9ui7877687 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-04-04T17:04:32+00:00 |
|
5b55bbc6818bccd55b604d71bd1b288b46a050a2 | ## Data Description
Long-COVID related articles have been manually collected by information specialists.
Please find further information [here](https://doi.org/10.1093/database/baac048).
## Size
||Training|Development|Test|Total|
|--|--|--|--|--|
|Positive Examples|215|76|70|345|
|Negative Examples|199|62|68|345|
|Total|414|138|138|690|
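A minimal loading sketch; note that the split names and the label column used below are assumptions based on the table above, since the card does not document the file layout:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("llangnickel/long-covid-classification-data")
print({split: d.num_rows for split, d in ds.items()})
# Inspect the class balance of the training split, assuming a "label" column exists.
print(Counter(ds["train"]["label"]))
```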
## Citation
```
@article{10.1093/database/baac048,
    author  = {Langnickel, Lisa and Darms, Johannes and Heldt, Katharina and Ducks, Denise and Fluck, Juliane},
    title   = "{Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID}",
    journal = {Database},
    volume  = {2022},
    year    = {2022},
    month   = {07},
    issn    = {1758-0463},
    doi     = {10.1093/database/baac048},
    url     = {https://doi.org/10.1093/database/baac048},
    note    = {baac048},
    eprint  = {https://academic.oup.com/database/article-pdf/doi/10.1093/database/baac048/44371817/baac048.pdf},
}
```
| llangnickel/long-covid-classification-data | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "Dataset containing abstracts from PubMed, either related to long COVID or not. "} | 2022-11-24T10:29:58+00:00 |
936ce94b2393ccff9d8ab5e37c17c3cba70075f4 | # spoken-punctuation
Spoken punctuation for Speech-to-Text by language and locale.
## Disclaimer
Data collected from Google Cloud Speech-to-Text "Supported spoken punctuation" documentation: https://cloud.google.com/speech-to-text/docs/spoken-punctuation
| loretoparisi/spoken-punctuation | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-24T21:12:49+00:00 |
ebad3013a8a015074a69a9826d06c38b750e1bce |
# Dataset Card for MeLiSA (Mercado Libre for Sentiment Analysis)
**NOTE: THIS CARD IS UNDER CONSTRUCTION**
**NOTE 2: THE RELEASED VERSION OF THIS DATASET IS A DEMO VERSION.**
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Webpage:** https://github.com/lpsc-fiuba/MeLiSA
- **Paper:**
- **Point of Contact:** [email protected]
[More Information Needed]
### Dataset Summary
We provide a Mercado Libre product reviews dataset for Spanish and Portuguese text classification. The dataset contains reviews in these two languages collected between August 2020 and January 2021. Each record in the dataset contains the review content and title, the star rating, the country where it was published and the product category (arts, technology, etc.). The corpus is roughly balanced across stars, so each star rating constitutes approximately 20% of the reviews in each language.
| || Spanish ||| Portuguese ||
|---|:------:|:----------:|:-----:|:------:|:----------:|:-----:|
| | Train | Validation | Test | Train | Validation | Test |
| 1 | 88.425 | 4.052 | 5.000 | 50.801 | 4.052 | 5.000 |
| 2 | 88.397 | 4.052 | 5.000 | 50.782 | 4.052 | 5.000 |
| 3 | 88.435 | 4.052 | 5.000 | 50.797 | 4.052 | 5.000 |
| 4 | 88.449 | 4.052 | 5.000 | 50.794 | 4.052 | 5.000 |
| 5 | 88.402 | 4.052 | 5.000 | 50.781 | 4.052 | 5.000 |
The table shows the number of samples per star rating in each split. There is a total of 442.108 training samples in Spanish and 253.955 in Portuguese. We limited the number of reviews per product to 30 and performed a ranked inclusion of the downloaded reviews to favor those with rich semantic content. In this ranking, the length of the review content and its valorization (difference between likes and dislikes) were prioritized. For more details on this process, see (CITATION).
Reviews in Spanish were obtained from 8 different Latin American countries (Argentina, Colombia, Peru, Uruguay, Chile, Venezuela and Mexico), and Portuguese reviews were extracted from Brasil. To match the language with its respective country, we applied a language detection algorithm based on the works of Joulin et al. (2016a and 2016b) to determine the language of the review text, and we removed reviews that were not written in the expected language.
[More Information Needed]
### Languages
The dataset contains reviews in Latin American Spanish and Portuguese.
## Dataset Structure
### Data Instances
Each data instance corresponds to a review. Each split is stored in a separate `.csv` file, and every row in each file corresponds to a review. For example, here we show a snippet of the Spanish training split:
```csv
country,category,review_content,review_title,review_rate
...
MLA,Tecnología y electrónica / Tecnologia e electronica,Todo bien me fue muy util.,Muy bueno,2
MLU,"Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal",No fue lo que esperaba. El producto no me sirvió.,No fue el producto que esperé ,2
MLM,Tecnología y electrónica / Tecnologia e electronica,No fue del todo lo que se esperaba.,No me fue muy funcional ahí que hacer ajustes,2
...
```
### Data Fields
- `country`: The string identifier of the country. It could be one of the following: `MLA` (Argentina), `MCO` (Colombia), `MPE` (Peru), `MLU` (Uruguay), `MLC` (Chile), `MLV` (Venezuela), `MLM` (Mexico) or `MLB` (Brasil).
- `category`: String representation of the product's category. It could be one of the following:
- Hogar / Casa
- Tecnologı́a y electrónica / Tecnologia e electronica
- Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal
- Arte y entretenimiento / Arte e Entretenimiento
- Alimentos y Bebidas / Alimentos e Bebidas
- `review_content`: The text content of the review.
- `review_title`: The text title of the review.
- `review_rate`: An int between 1-5 indicating the number of stars.
### Data Splits
Each language configuration comes with it's own `train`, `validation`, and `test` splits. The `all_languages` split is simply a concatenation of the corresponding split across all languages. That is, the `train` split for `all_languages` is a concatenation of the `train` splits for each of the languages and likewise for `validation` and `test`.
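A minimal sketch of how these configurations would typically be loaded; the configuration names `es`, `pt` and `all_languages` follow the description above but are assumptions about the loading script:
```python
from datasets import load_dataset

# Spanish-only reviews (configuration name assumed)
melisa_es = load_dataset("lpsc-fiuba/melisa", "es")
# Concatenation of all languages (configuration name assumed)
melisa_all = load_dataset("lpsc-fiuba/melisa", "all_languages")

sample = melisa_es["train"][0]
print(sample["review_title"], "|", sample["review_rate"], "stars")
```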
## Dataset Creation
### Curation Rationale
The dataset is motivated by the desire to advance sentiment analysis and text classification in Latin American Spanish and Portuguese.
### Source Data
#### Initial Data Collection and Normalization
The authors gathered the reviews from the marketplaces in Argentina, Colombia, Peru, Uruguay, Chile, Venezuela and Mexico for the Spanish language and from Brasil for Portuguese. They prioritized reviews that contained relevant semantic content by applying a ranking filter based on the length and the valorization (difference between the number of likes and dislikes) of the review. They then ensured the correct language by applying a semi-automatic language detection algorithm, only retaining reviews in the target language. No normalization was applied to the review content or title.
Original product categories were grouped into higher-level categories, resulting in five different types of products: "Home" (Hogar / Casa), "Technology and Electronics" (Tecnología y electrónica / Tecnologia e electronica), "Health, Dress and Personal Care" (Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal), "Arts and Entertainment" (Arte y entretenimiento / Arte e Entretenimiento) and "Food and Beverages" (Alimentos y Bebidas / Alimentos e Bebidas).
#### Who are the source language producers?
The original text comes from Mercado Libre customers reviewing products on the marketplace across a variety of product categories.
### Annotations
#### Annotation process
Each of the fields included are submitted by the user with the review or otherwise associated with the review. No manual or machine-driven annotation was necessary.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Mercado Libre reviews are submitted by users with the knowledge and intention of being public. The reviewer IDs included in this dataset are anonymized, meaning that they are disassociated from the original user profiles. However, these fields would likely be easy to de-anonymize given the public and identifying nature of free-form text responses.
## Considerations for Using the Data
### Social Impact of Dataset
Although Spanish and Portuguese are relatively high-resource languages, most of the available data is collected from European or United States users. This dataset is part of an effort to encourage text classification research in languages other than English and European Spanish and Portuguese. Such work increases the accessibility of natural language technology to more regions and cultures.
### Discussion of Biases
The data included here are from unverified consumers. Some percentage of these reviews may be fake or contain misleading or offensive language.
### Other Known Limitations
The dataset is constructed so that the distribution of star ratings is roughly balanced. This feature has some advantages for purposes of classification, but some types of language may be over- or underrepresented relative to the original distribution of reviews to achieve this balance.
[More Information Needed]
## Additional Information
### Dataset Curators
Published by Lautaro Estienne, Matías Vera and Leonardo Rey Vega. Managed by the Signal Processing in Communications Laboratory of the Electronics Department at the Engineering School of the University of Buenos Aires (UBA).
### Licensing Information
Amazon has licensed this dataset under its own agreement, to be found at the dataset webpage here:
https://docs.opendata.aws/amazon-reviews-ml/license.txt
### Citation Information
Please cite the following paper if you found this dataset useful:
(CITATION)
[More Information Needed]
### Contributions
[More Information Needed]
| lpsc-fiuba/melisa | [
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"source_datasets:original",
"language:es",
"language:pt",
"license:other",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["es", "pt"], "license": ["other"], "multilinguality": {"all_languages": ["multilingual"], "es": ["monolingual"], "pt": ["monolingual"]}, "size_categories": {"all_languages": ["100K<n<1M"], "es": ["100K<n<1M"], "pt": ["100K<n<1M"]}, "source_datasets": ["original"], "task_categories": ["conditional-text-generation", "sequence-modeling", "text-classification", "text-scoring"], "task_ids": ["language-modeling", "sentiment-classification", "sentiment-scoring", "summarization", "topic-classification"]} | 2022-10-22T07:52:56+00:00 |
2a8784deddebd5bfcd0cb9f276139f91e814b9c8 | lsb/ancient-latin-passages | [
"license:agpl-3.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "agpl-3.0"} | 2022-01-31T18:22:55+00:00 |