| modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/curlyjunglejake | fe0d1fde24aebb902b0ccf3f2965a982396a25f0 | 2021-05-21T23:51:27.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/curlyjunglejake | 9 | null | transformers | 12,300 |
---
language: en
thumbnail: https://www.huggingtweets.com/curlyjunglejake/1611588649017/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/866006337255227393/jLbqeyn3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dr. Jacob Glanville 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@curlyjunglejake bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@curlyjunglejake's tweets](https://twitter.com/curlyjunglejake).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2193 |
| Retweets | 94 |
| Short tweets | 194 |
| Tweets kept | 1905 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wpg429u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @curlyjunglejake's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2u5lcs29) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2u5lcs29/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/curlyjunglejake')
generator("My dream is", num_return_sequences=5)
```
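If you need more control over decoding than the pipeline exposes, you can also load the tokenizer and model directly. A minimal sketch; the sampling parameters below are illustrative assumptions, not the settings used by the demo:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/curlyjunglejake')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/curlyjunglejake')

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,           # tweets are short, so a small cap suffices
    do_sample=True,          # sample rather than decode greedily
    top_p=0.95,              # nucleus sampling threshold (assumed value)
    num_return_sequences=5,
)
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```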
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/danielgedda | 1a36f0104c5c4ee508fc18af5fa6d84997a943a1 | 2021-05-22T00:34:52.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/danielgedda | 9 | null | transformers | 12,301 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1267943406304743424/QS6bXLq-_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Daniel Gedda Nuño 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@danielgedda bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@danielgedda's tweets](https://twitter.com/danielgedda).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3124 |
| Retweets | 2715 |
| Short tweets | 36 |
| Tweets kept | 373 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/xk4kfjse/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
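The linked artifacts can also be fetched programmatically with the public `wandb` API. A sketch, assuming the run path from the link above and a recent `wandb` client:
```python
import wandb

api = wandb.Api()
run = api.run("wandb/huggingtweets/xk4kfjse")  # entity/project/run_id from the link above

# Download each artifact logged by this run (one per pipeline step)
for artifact in run.logged_artifacts():
    path = artifact.download()
    print(artifact.name, "->", path)
```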
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @danielgedda's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3lyvifcb) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3lyvifcb/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/danielgedda')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/disabledjess | 014fe0402ef420e26aac8b481ddf121ccaeb965a | 2021-05-22T01:44:27.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/disabledjess | 9 | null | transformers | 12,302 |
---
language: en
thumbnail: https://www.huggingtweets.com/disabledjess/1616670355194/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1336779061025267715/zRfiUbb7_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jess O'Brien 🤖 AI Bot </div>
<div style="font-size: 15px">@disabledjess bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@disabledjess's tweets](https://twitter.com/disabledjess).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 713 |
| Retweets | 324 |
| Short tweets | 34 |
| Tweets kept | 355 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dt08vg5c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disabledjess's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zxrg63ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zxrg63ip/artifacts) is logged and versioned.
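The fine-tuning step itself is ordinary causal-language-model training. A minimal sketch of what it might look like with `transformers` (the corpus and hyperparameters here are placeholders; the actual values are recorded in the W&B run linked above):
```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tweets = ["placeholder kept tweet one", "placeholder kept tweet two"]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokenize the kept tweets into training examples
dataset = Dataset.from_dict({"text": tweets}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           num_train_epochs=4,              # assumed
                           per_device_train_batch_size=8),  # assumed
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```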
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/disabledjess')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/doctor_emmet | ad2d266c99f15d828c9261a27d27af7548091eb3 | 2021-05-22T01:53:07.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/doctor_emmet | 9 | null | transformers | 12,303 |
---
language: en
thumbnail: https://www.huggingtweets.com/doctor_emmet/1603833315216/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1250027548785938432/KHyOaVQY_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Emmet Burke 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@doctor_emmet bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@doctor_emmet's tweets](https://twitter.com/doctor_emmet).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2496 |
| Retweets | 204 |
| Short tweets | 176 |
| Tweets kept | 2116 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/duj1xqx6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @doctor_emmet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/yzdnl9ld) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/yzdnl9ld/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/doctor_emmet')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/eiritana | b10e27a71574403423f8445c536e56ccd8ee3382 | 2021-05-22T02:49:05.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/eiritana | 9 | null | transformers | 12,304 |
---
language: en
thumbnail: https://www.huggingtweets.com/eiritana/1617882396659/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1178700495487164418/kNT2--o-_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Eiritana ᚖ エリタナ - XY Æ SR-71✨🌐 🍓🖤🤍🖤✨ 🤖 AI Bot </div>
<div style="font-size: 15px">@eiritana bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@eiritana's tweets](https://twitter.com/eiritana).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 1692 |
| Short tweets | 567 |
| Tweets kept | 967 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nuy0f3c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eiritana's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2puqx075) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2puqx075/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eiritana')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/etcanada | 8568994512a753a86850a6be78ab7e98aa51d72d | 2021-05-22T03:32:55.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/etcanada | 9 | null | transformers | 12,305 |
---
language: en
thumbnail: https://www.huggingtweets.com/etcanada/1613324841076/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1159160036125564930/33nAmouA_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ET Canada 🤖 AI Bot </div>
<div style="font-size: 15px">@etcanada bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@etcanada's tweets](https://twitter.com/etcanada).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 27 |
| Short tweets | 13 |
| Tweets kept | 3201 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mkfurkr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @etcanada's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37a3w2d0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37a3w2d0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/etcanada')
generator("My dream is", num_return_sequences=5)
```
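Generation is stochastic, so each call returns different samples. If you need reproducible output, fix the seed first (a usage sketch):
```python
from transformers import pipeline, set_seed

set_seed(42)  # fix the RNGs so repeated runs yield the same samples
generator = pipeline('text-generation', model='huggingtweets/etcanada')
generator("My dream is", num_return_sequences=5)
```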
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/femboympreg | d5023508851141bf4834dc75a7ecb5eef2872590 | 2021-05-22T04:05:37.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/femboympreg | 9 | null | transformers | 12,306 |
---
language: en
thumbnail: https://www.huggingtweets.com/femboympreg/1617809081812/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1370404573374976005/WyjvD-FA_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Storm | 嵐 🤖 AI Bot </div>
<div style="font-size: 15px">@femboympreg bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@femboympreg's tweets](https://twitter.com/femboympreg).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 594 |
| Short tweets | 969 |
| Tweets kept | 1649 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/30bwh0wo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @femboympreg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8vc73356) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8vc73356/artifacts) is logged and versioned.
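Models on the Hugging Face Hub are git-versioned as well, so a load can be pinned to a specific revision; a usage sketch (the revision value below is a placeholder):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "huggingtweets/femboympreg",
    revision="main",  # placeholder: replace with a commit hash or tag to pin
)
```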
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/femboympreg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/gatchabot | 3f7bc6397bf85915836024a9730d668de77ac129 | 2021-05-22T05:04:29.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/gatchabot | 9 | null | transformers | 12,307 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1234322984183226369/3KzZ3P1J_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">gatcha 🤖 AI Bot </div>
<div style="font-size: 15px">@gatchabot bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gatchabot's tweets](https://twitter.com/gatchabot).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2200 |
| Retweets | 1728 |
| Short tweets | 121 |
| Tweets kept | 351 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qhi9616/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gatchabot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o3eonr9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o3eonr9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gatchabot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/girlchrismarker | b85789619b1a7daae89905fbf46b9dc7bc109f65 | 2021-05-22T05:30:22.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/girlchrismarker | 9 | null | transformers | 12,308 |
---
language: en
thumbnail: https://www.huggingtweets.com/girlchrismarker/1614168569443/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1307921775364378624/yMwFpRpo_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">sátántangó nightcore 🤖 AI Bot </div>
<div style="font-size: 15px">@girlchrismarker bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@girlchrismarker's tweets](https://twitter.com/girlchrismarker).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 369 |
| Retweets | 67 |
| Short tweets | 79 |
| Tweets kept | 223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ex2qo7c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @girlchrismarker's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/e1iq56ka) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/e1iq56ka/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/girlchrismarker')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/granblue_en | 78ffb654ed40783dd3aa871df9f2516667d4d3be | 2021-05-22T06:05:45.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/granblue_en | 9 | null | transformers | 12,309 |
---
language: en
thumbnail: http://www.huggingtweets.com/granblue_en/1600399682930/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1255141505720672257/flNLLFAC_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">グランブルー EN 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@granblue_en bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@granblue_en's tweets](https://twitter.com/granblue_en).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3222 |
| Retweets | 252 |
| Short tweets | 59 |
| Tweets kept | 2911 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2pwcb5ci/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @granblue_en's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2tq5wz9d) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2tq5wz9d/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/granblue_en')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/hannesbajohr | 8404983d47b97030aaa94da97fdb91ec2c3731e3 | 2021-05-22T06:34:54.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/hannesbajohr | 9 | null | transformers | 12,310 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/467172766416789504/01jisH73_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Hannes Bajohr 🤖 AI Bot </div>
<div style="font-size: 15px">@hannesbajohr bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@hannesbajohr's tweets](https://twitter.com/hannesbajohr).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3210 |
| Retweets | 1663 |
| Short tweets | 293 |
| Tweets kept | 1254 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/32cptzpn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hannesbajohr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2lxf36v7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2lxf36v7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hannesbajohr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/hardmaru | f86a5444f953121df346551bab75e5fbb82ccc3c | 2021-05-22T06:37:39.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/hardmaru | 9 | null | transformers | 12,311 |
---
language: en
thumbnail: https://www.huggingtweets.com/hardmaru/1620671462182/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1244133811278852097/rxL5LqpS_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">hardmaru</div>
<div style="text-align: center; font-size: 14px;">@hardmaru</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from hardmaru.
| Data | hardmaru |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 587 |
| Short tweets | 246 |
| Tweets kept | 2411 |
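The counts above come from filtering the raw timeline before fine-tuning: retweets and very short tweets are dropped. A sketch of that filtering logic (the length threshold is an assumption for illustration; the real rules live in the huggingtweets repository):
```python
def keep_tweet(text: str, min_chars: int = 10) -> bool:
    """Mirror the table above: drop retweets and very short tweets."""
    if text.startswith("RT @"):        # retweet
        return False
    if len(text.strip()) < min_chars:  # short tweet (threshold assumed)
        return False
    return True

timeline = ["RT @someone: hi", "ok", "a real tweet long enough to keep"]
kept = [t for t in timeline if keep_tweet(t)]  # -> only the last survives
```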
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rlh65t6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hardmaru's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bwhefwe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bwhefwe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hardmaru')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/itemlabel | d0aaf08b0703b2f714f7a281621ff446c38b193c | 2021-05-22T08:37:14.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/itemlabel | 9 | null | transformers | 12,312 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1359348009725808641/KyPjQGzk_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">itemLabel 🤖 AI Bot </div>
<div style="font-size: 15px">@itemlabel bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itemlabel's tweets](https://twitter.com/itemlabel).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3188 |
| Retweets | 1796 |
| Short tweets | 389 |
| Tweets kept | 1003 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/10hookja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itemlabel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1u63m0wj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1u63m0wj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itemlabel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/johannesreck | f6dc23694bc54c5a2e44db0a82bebf1c6b8a2235 | 2021-05-22T09:55:59.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/johannesreck | 9 | null | transformers | 12,313 |
---
language: en
thumbnail: https://www.huggingtweets.com/johannesreck/1617820959621/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/657647990769872896/fzDbsUop_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Johannes Reck 🤖 AI Bot </div>
<div style="font-size: 15px">@johannesreck bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@johannesreck's tweets](https://twitter.com/johannesreck).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1579 |
| Retweets | 335 |
| Short tweets | 38 |
| Tweets kept | 1206 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d9mk25o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @johannesreck's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rjx3zio) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rjx3zio/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/johannesreck')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/johnlimouze | c39c5d8843f0cb252e0740e4ecd4d0c3a4dcb60c | 2021-05-22T09:58:04.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/johnlimouze | 9 | null | transformers | 12,314 |
---
language: en
thumbnail: https://www.huggingtweets.com/johnlimouze/1614164967543/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1247836262519771136/IzX0FhAt_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">John Limouze 🤖 AI Bot </div>
<div style="font-size: 15px">@johnlimouze bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@johnlimouze's tweets](https://twitter.com/johnlimouze).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 402 |
| Short tweets | 615 |
| Tweets kept | 2200 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3q6a1aqr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @johnlimouze's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5916wbk0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5916wbk0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/johnlimouze')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jokowi | da03578fa16d4c622de7e16ad4f00b0b16ec52ab | 2021-05-22T10:02:24.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/jokowi | 9 | null | transformers | 12,315 |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1299550083097059332/uK26iMOu_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joko Widodo</div>
<div style="text-align: center; font-size: 14px;">@jokowi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joko Widodo.
| Data | Joko Widodo |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 1 |
| Short tweets | 5 |
| Tweets kept | 3234 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c1qe98am/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jokowi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gawgg6d1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gawgg6d1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jokowi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/josephmama666 | 246dba2d0649f18ed0e8b2948ebfc0bf00abf133 | 2021-05-22T10:07:11.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/josephmama666 | 9 | null | transformers | 12,316 |
---
language: en
thumbnail: https://www.huggingtweets.com/josephmama666/1614134283340/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1337312159324258305/XLP7epZE_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">j 🤖 AI Bot </div>
<div style="font-size: 15px">@josephmama666 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@josephmama666's tweets](https://twitter.com/josephmama666).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3156 |
| Retweets | 1809 |
| Short tweets | 201 |
| Tweets kept | 1146 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/157t36eh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @josephmama666's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ahfjdey) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ahfjdey/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/josephmama666')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/l2k | 1d8f3d9ddc6bce20fcbeea298b6afe967208ccd5 | 2021-05-22T11:20:37.000Z | ["pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets"] | text-generation | false | huggingtweets | null | huggingtweets/l2k | 9 | null | transformers | 12,317 |
---
language: en
thumbnail: http://res.cloudinary.com/huggingtweets/image/upload/v1599871089/l2k.jpg
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/573383872/img_0621_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lukas Biewald 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@l2k bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@l2k's tweets](https://twitter.com/l2k).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2580 |
| Retweets | 598 |
| Short tweets | 88 |
| Tweets kept | 1894 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/17e2cw73/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @l2k's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/10mi5zis) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/10mi5zis/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/l2k')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/michelleobama | 865f44e599db26e135fd7dec62f95c939e71a802 | 2022-06-13T15:21:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/michelleobama | 9 | null | transformers | 12,318 | ---
language: en
thumbnail: http://www.huggingtweets.com/michelleobama/1655133694921/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507471015110139906/T9rDVcLd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Michelle Obama</div>
<div style="text-align: center; font-size: 14px;">@michelleobama</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Michelle Obama.
| Data | Michelle Obama |
| --- | --- |
| Tweets downloaded | 1932 |
| Retweets | 439 |
| Short tweets | 10 |
| Tweets kept | 1483 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2m7f8b6p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @michelleobama's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/200pdxti) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/200pdxti/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/michelleobama')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mikrodystopies | 8d0d47ccc5ab9b770bbcd7ce57994f74ae36da52 | 2021-05-22T14:41:31.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/mikrodystopies | 9 | null | transformers | 12,319 | ---
language: en
thumbnail: https://www.huggingtweets.com/mikrodystopies/1604658435538/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1313931951791902720/P5xuzPnM_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Mikrodystopies 🤖 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@mikrodystopies bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@mikrodystopies's tweets](https://twitter.com/mikrodystopies).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1353 |
| Retweets | 14 |
| Short tweets | 3 |
| Tweets kept | 1336 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3ujepu0f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mikrodystopies's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/6omc5zso) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/6omc5zso/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/mikrodystopies')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mutilumila | 114dcb07237ac964d927fe5a47a94d815af35962 | 2021-05-22T15:35:13.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/mutilumila | 9 | null | transformers | 12,320 | ---
language: en
thumbnail: https://www.huggingtweets.com/mutilumila/1616785118212/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1367580181171470336/VGbeIwgL_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">p a ' u l 🤖 AI Bot </div>
<div style="font-size: 15px">@mutilumila bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@mutilumila's tweets](https://twitter.com/mutilumila).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3227 |
| Retweets | 432 |
| Short tweets | 618 |
| Tweets kept | 2177 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xkgonzr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mutilumila's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2oplbn5a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2oplbn5a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mutilumila')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/noellayoshino | 5dd1a4b6216db2b5e0c907eaad0be643656d886c | 2021-05-22T16:40:41.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/noellayoshino | 9 | null | transformers | 12,321 | ---
language: en
thumbnail: https://www.huggingtweets.com/noellayoshino/1620681697974/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1327039258998304768/RijuiRwR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Noella Ch. 💜 ENVtuber 💜 maybe pyon musk arc</div>
<div style="text-align: center; font-size: 14px;">@noellayoshino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Noella Ch. 💜 ENVtuber 💜 maybe pyon musk arc.
| Data | Noella Ch. 💜 ENVtuber 💜 maybe pyon musk arc |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 349 |
| Short tweets | 1041 |
| Tweets kept | 1859 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ho6398t5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @noellayoshino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/r6l29rjm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/r6l29rjm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/noellayoshino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/notanastronomer | b404b8fd315c581b855df49498cdc870d4dfea92 | 2021-05-22T16:52:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/notanastronomer | 9 | null | transformers | 12,322 | ---
language: en
thumbnail: https://www.huggingtweets.com/notanastronomer/1616727503635/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1344888565176487936/SIjKeap6_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lauren Gilbert 🤖 AI Bot </div>
<div style="font-size: 15px">@notanastronomer bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@notanastronomer's tweets](https://twitter.com/notanastronomer).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3221 |
| Retweets | 255 |
| Short tweets | 349 |
| Tweets kept | 2617 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/a2lf1xnl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @notanastronomer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kzasb23) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kzasb23/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/notanastronomer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/peterxinping | 18178d36056033e66815a5eb6b325d13c3a71641 | 2021-05-22T18:31:31.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/peterxinping | 9 | null | transformers | 12,323 | ---
language: en
thumbnail: https://www.huggingtweets.com/peterxinping/1604073988733/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1305634622982615040/IfCxeFKW_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Peter 🦍🍌 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@peterxinping bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@peterxinping's tweets](https://twitter.com/peterxinping).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3191 |
| Retweets | 145 |
| Short tweets | 585 |
| Tweets kept | 2461 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/18v07hjh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @peterxinping's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2vg3a37t) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2vg3a37t/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/peterxinping')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/philosoraptor | 1f33b669f6f50d7445c66122bca15f57baf86afb | 2021-05-22T18:39:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/philosoraptor | 9 | null | transformers | 12,324 | ---
language: en
thumbnail: https://www.huggingtweets.com/philosoraptor/1616695417900/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/968909875/symbol_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Real organic pattern 🤖 AI Bot </div>
<div style="font-size: 15px">@philosoraptor bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@philosoraptor's tweets](https://twitter.com/philosoraptor).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 700 |
| Short tweets | 278 |
| Tweets kept | 2218 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k8xlpzy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @philosoraptor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5wwiewx7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5wwiewx7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/philosoraptor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rocio_old | b06174e2de2e6e27c78580191f6d1c3948490f09 | 2021-05-22T21:18:47.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/rocio_old | 9 | null | transformers | 12,325 | ---
language: en
thumbnail: https://www.huggingtweets.com/rocio_old/1608309167358/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1008386786501038085/1GlH4lXi_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Rocio ☀ 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@rocio_old bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@rocio_old's tweets](https://twitter.com/rocio_old).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3012 |
| Retweets | 590 |
| Short tweets | 491 |
| Tweets kept | 1931 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1efigh7w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rocio_old's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3tu41ukw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3tu41ukw/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/rocio_old')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/scrubphilosophy | f1d6e1c6c678b0c19f473e83f15c220bbdfb5773 | 2021-05-22T22:16:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/scrubphilosophy | 9 | null | transformers | 12,326 | ---
language: en
thumbnail: https://www.huggingtweets.com/scrubphilosophy/1616731281223/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1198090654263283719/Vud98Uvd_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Scrub 🤖 AI Bot </div>
<div style="font-size: 15px">@scrubphilosophy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@scrubphilosophy's tweets](https://twitter.com/scrubphilosophy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1923 |
| Retweets | 512 |
| Short tweets | 467 |
| Tweets kept | 944 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39yhwp4h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scrubphilosophy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33gnfi5r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33gnfi5r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/scrubphilosophy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sellarsrespectr | 543833fb179574e0757b926a48995f2533e1838d | 2021-05-22T22:25:43.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/sellarsrespectr | 9 | null | transformers | 12,327 | ---
language: en
thumbnail: https://www.huggingtweets.com/sellarsrespectr/1616720155815/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1004831714231742464/zoP72CMZ_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">•Nate• •BLM• 🤖 AI Bot </div>
<div style="font-size: 15px">@sellarsrespectr bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@sellarsrespectr's tweets](https://twitter.com/sellarsrespectr).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 272 |
| Short tweets | 416 |
| Tweets kept | 2549 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s51p72h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sellarsrespectr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tus3zndp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tus3zndp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sellarsrespectr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sky_obito | d029d9f98678d8597951e37f2dec0bffdcc5be90 | 2021-05-22T23:00:58.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/sky_obito | 9 | null | transformers | 12,328 | ---
language: en
thumbnail: https://www.huggingtweets.com/sky_obito/1614214046985/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1347274090051117057/3fKG8-pm_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lenalee (CW: Dragon Prince) 🤖 AI Bot </div>
<div style="font-size: 15px">@sky_obito bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@sky_obito's tweets](https://twitter.com/sky_obito).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3113 |
| Retweets | 2349 |
| Short tweets | 236 |
| Tweets kept | 528 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z2vftrh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sky_obito's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/396z3s7q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/396z3s7q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sky_obito')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/spknnk | ad9d151cb65abaa95d3b5f5430f5bef2763d69eb | 2021-05-22T23:43:17.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/spknnk | 9 | null | transformers | 12,329 | ---
language: en
thumbnail: https://www.huggingtweets.com/spknnk/1616845130596/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1355067555254300673/j96wD3_V_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">я миша 🤖 AI Bot </div>
<div style="font-size: 15px">@spknnk bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@spknnk's tweets](https://twitter.com/spknnk).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 42 |
| Short tweets | 1066 |
| Tweets kept | 2142 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qqeli5b6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spknnk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hgf21to) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hgf21to/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/spknnk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/spookymachine | 42cfad065b2e8e485d413385d9db29c24483050a | 2021-05-22T23:44:41.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/spookymachine | 9 | null | transformers | 12,330 | ---
language: en
thumbnail: https://www.huggingtweets.com/spookymachine/1617758539359/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1379523570473242625/YmJkdku3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Alea, Conjecture Of Goo 🤖 AI Bot </div>
<div style="font-size: 15px">@spookymachine bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@spookymachine's tweets](https://twitter.com/spookymachine).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 217 |
| Short tweets | 254 |
| Tweets kept | 2765 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/p3syzv61/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spookymachine's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g5tax8a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g5tax8a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/spookymachine')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/swedense | cd7b012c61530b2e3b7ffe8935fbd3eadc229338 | 2021-05-23T00:27:12.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/swedense | 9 | null | transformers | 12,331 | ---
language: en
thumbnail: https://www.huggingtweets.com/swedense/1603209768542/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000278977006/4c9e101ebb2a66314de5f74fb4bd7787_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Sweden.se 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@swedense bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@swedense's tweets](https://twitter.com/swedense).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 438 |
| Short tweets | 686 |
| Tweets kept | 2119 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/gn7q9sno/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @swedense's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3pxwkwmx) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3pxwkwmx/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/swedense')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/uwusman | c6b1b732896c723da9917dfc40cbec18e141a67c | 2021-05-23T03:30:45.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/uwusman | 9 | null | transformers | 12,332 | ---
language: en
thumbnail: https://www.huggingtweets.com/uwusman/1614213200557/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362109761894772739/TQjSw0lI_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">UwUsman el Pez | piss arc 🤖 AI Bot </div>
<div style="font-size: 15px">@uwusman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@uwusman's tweets](https://twitter.com/uwusman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 576 |
| Short tweets | 629 |
| Tweets kept | 2036 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rutezz3k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @uwusman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3i0d4br9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3i0d4br9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/uwusman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
iarfmoose/wav2vec2-large-xlsr-kyrgyz | 6019fb1bd765e42c8ad1cf70944b661cf266d2db | 2021-07-06T05:57:02.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ky",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | iarfmoose | null | iarfmoose/wav2vec2-large-xlsr-kyrgyz | 9 | null | transformers | 12,333 | ---
language: ky
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Kyrgyz by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ky
type: common_voice
args: ky
metrics:
- name: Test WER
type: wer
value: 34.71
---
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
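For example, to run the model on your own recording, first resample it to 16 kHz (a minimal sketch; `my_recording.wav` is a placeholder file name):

```python
import torchaudio

speech_array, sampling_rate = torchaudio.load("my_recording.wav")
if sampling_rate != 16_000:
    # Resample whatever the source rate is to the 16 kHz the model expects
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
```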
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ky", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ky", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\—\¬\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.71 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/XLSR_Kyrgyz.ipynb).
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/wav2vec2_ky_eval.ipynb). |
icelab/spacebert_CR | 039c85430ddfcccc7a8e6d1bb7ab78d1af456884 | 2022-02-16T09:29:17.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | icelab | null | icelab/spacebert_CR | 9 | null | transformers | 12,334 | ---
widget:
- text: "The CubeSat RF design shall either have one RF inhibit and a RF power output no greater than 1.5W at the transmitter antenna's RF input OR the CubeSat shall have a minimum of two independent RF inhibits (CDS 3.3.9) (ISO 5.5.6)."
---
# spacebert_CR
### Model description
This is a fine-tuned SpaceSciBERT model for a Concept Recognition task, from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The [fine-tuning](https://github.com/strath-ace/smart-nlp/blob/master/SpaceTransformers/CR/CR_ECSS_dataset.json) dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
[](https://colab.research.google.com/drive/1EGh9bdxq6RqIzbvKuptAWvmIBG2EQJzJ?usp=sharing)
### BibTeX entry and citation info
```
@ARTICLE{9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659} }
``` |
ietz/distilroberta-base-finetuned-jira-qt-issue-titles-and-bodies | 95f5cebdc126e2242edaf333ae5aef38fbc4d063 | 2022-01-07T21:26:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"jira",
"code",
"issue",
"development",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | ietz | null | ietz/distilroberta-base-finetuned-jira-qt-issue-titles-and-bodies | 9 | null | transformers | 12,335 | ---
language:
- en
tags:
- jira
- code
- issue
- development
license: mit
---
`distilroberta-base` fine-tuned for masked language modeling on 247,731 mixed issue titles (n=126,213) and descriptions (n=121,518). Trained for up to 50 epochs. |
inergi/wav2vec2-from-scratch-finetune-dummy | d499b1b5a5bbebbae952e52e6b93ed92eeef5cb4 | 2021-12-15T08:18:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | inergi | null | inergi/wav2vec2-from-scratch-finetune-dummy | 9 | null | transformers | 12,336 | ---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian by cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 25.86
---
Dummy Model New |
infinitejoy/wav2vec2-large-xls-r-300m-assamese | 3e7f332d83973ff852cdc66c596a731635f68c05 | 2022-03-24T11:53:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"as",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-assamese | 9 | 1 | transformers | 12,337 | ---
license: apache-2.0
language: as
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning
- as
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Assamese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: as
metrics:
- name: Test WER
type: wer
value: 72.64
- name: Test CER
type: cer
value: 27.35
---
# wav2vec2-large-xls-r-300m-assamese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_7_0 dataset.
It achieves the following results on the evaluation set:
- WER: 0.7954545454545454
- CER: 0.32341269841269843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
To compute the evaluation metrics, run:
```bash
cd wav2vec2-large-xls-r-300m-assamese; python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config as --split test --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 16
- eval_batch_size: 8
- seed: not given
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 400
- mixed_precision_training: Native AMP
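As a rough illustration, these settings map onto a `TrainingArguments` configuration along the following lines (a sketch only; the original training script is not reproduced here, and `output_dir` is an assumed value):
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters using the standard Trainer API.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-assamese",  # assumed output path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # gives a total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=400,
    fp16=True,                       # mixed precision (Native AMP)
)
```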
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.584065 | NA | 400 | 1.584065 | 0.915512 |
| 1.658865 | NA | 800 | 1.658865 | 0.805096 |
| 1.882352 | NA | 1200 | 1.882352 | 0.820742 |
| 1.881240 | NA | 1600 | 1.881240 | 0.810907 |
| 2.159748 | NA | 2000 | 2.159748 | 0.804202 |
| 1.992871 | NA | 2400 | 1.992871 | 0.803308 |
| 2.201436 | NA | 2800 | 2.201436 | 0.802861 |
| 2.165218 | NA | 3200 | 2.165218 | 0.793920 |
| 2.253643 | NA | 3600 | 2.253643 | 0.796603 |
| 2.265880 | NA | 4000 | 2.265880 | 0.790344 |
| 2.293935 | NA | 4400 | 2.293935 | 0.797050 |
| 2.288851 | NA | 4800 | 2.288851 | 0.784086 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
infinitejoy/wav2vec2-large-xls-r-300m-kurdish | 2e2d1551533f1397a25fe742049655d69fd55df5 | 2022-03-23T18:33:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"kmr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-kurdish | 9 | 1 | transformers | 12,338 | ---
language:
- kmr
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- kmr
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Kurmanji Kurdish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: kmr
metrics:
- name: Test WER
type: wer
value: 102.308
- name: Test CER
type: cer
value: 538.748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kurdish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KMR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2548
- Wer: 0.2688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3161 | 12.27 | 2000 | 0.4199 | 0.4797 |
| 1.0643 | 24.54 | 4000 | 0.2982 | 0.3721 |
| 0.9718 | 36.81 | 6000 | 0.2762 | 0.3333 |
| 0.8772 | 49.08 | 8000 | 0.2586 | 0.3051 |
| 0.8236 | 61.35 | 10000 | 0.2575 | 0.2865 |
| 0.7745 | 73.62 | 12000 | 0.2603 | 0.2816 |
| 0.7297 | 85.89 | 14000 | 0.2539 | 0.2727 |
| 0.7079 | 98.16 | 16000 | 0.2554 | 0.2681 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
ionite/DialoGPT-medium-Sh0rtiAI | 42d0bd0380e451996fa7ab3afbcf872887b418fc | 2021-11-16T01:31:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | ionite | null | ionite/DialoGPT-medium-Sh0rtiAI | 9 | 1 | transformers | 12,339 | ---
tags:
- conversational
---
# Sh0rtiAI DialoGPT Model |
it5/it5-base-repubblica-to-ilgiornale | 45eb3ee75c9f147b4e4beec12a93c84eb31322ac | 2022-03-09T08:05:15.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"newspaper",
"ilgiornale",
"repubblica",
"style-transfer",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/it5-base-repubblica-to-ilgiornale | 9 | null | transformers | 12,340 | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: it5-base-repubblica-to-ilgiornale
results:
- task:
type: headline-style-transfer-repubblica-to-ilgiornale
name: "Headline style transfer (Repubblica to Il Giornale)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.272
name: "Test Rouge1"
- type: rouge2
value: 0.089
name: "Test Rouge2"
- type: rougeL
value: 0.235
name: "Test RougeL"
- type: bertscore
value: 0.396
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: headline-headline-consistency-classifier
value: 0.883
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.880
name: "Test Headline-Article Consistency Accuracy"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Base for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹
This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
r2g = pipeline("text2text-generation", model='it5/it5-base-repubblica-to-ilgiornale')
r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-repubblica-to-ilgiornale")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-repubblica-to-ilgiornale")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
it5/mt5-small-question-answering | 2db028bf1d12026e25024cf42e777ce509534e4b | 2022-03-09T07:57:03.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:squad_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"squad_it",
"text2text-question-answering",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/mt5-small-question-answering | 9 | null | transformers | 12,341 | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- italian
- sequence-to-sequence
- squad_it
- text2text-question-answering
- text2text-generation
widget:
- text: "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"
- text: "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"
- text: "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"
- text: "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?"
metrics:
- f1
- exact-match
model-index:
- name: mt5-small-question-answering
results:
- task:
type: question-answering
name: "Question Answering"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: f1
value: 0.660
name: "Test F1"
- type: exact-match
value: 0.560
name: "Test Exact Match"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# mT5 Small for Question Answering ⁉️ 🇮🇹
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model='it5/mt5-small-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-question-answering")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jean-paul/KinyaBERT-small | 4575f11375376ae29a8b19610f70c07ee04d02eb | 2021-08-29T10:24:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | jean-paul | null | jean-paul/KinyaBERT-small | 9 | null | transformers | 12,342 | # Model description
A pretrained model on the Kinyarwanda language dataset using a masked language modeling (MLM) objective. The BERT model was first introduced in [this paper](https://arxiv.org/abs/1810.04805). This KinyaBERT model was pretrained with uncased tokens, which means that there is no difference between, for example, ikinyarwanda and Ikinyarwanda.
# Training parameters
#### Dataset
The dataset combines news articles from Rwanda extracted from different news web pages, dumped Wikipedia files, and books in Kinyarwanda. The sizes of the data sources are 72 thousand news articles, three thousand dumped Wikipedia articles, and six books with more than a thousand pages.
#### Hyperparameters
The model was trained with the default configuration of BERT and the Trainer from Hugging Face. However, due to computational resource constraints, we reduced the number of transformer layers to 6.
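Concretely, this corresponds to a configuration along the following lines (a sketch; only the layer count deviates from the BERT defaults):
```
from transformers import BertConfig

# Default BERT configuration with the transformer depth reduced to 6 layers.
config = BertConfig(num_hidden_layers=6)
```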
# How to use:
1) The model can be used directly with the pipeline for masked language modeling as follows:
```
from transformers import pipeline
the_mask_pipe = pipeline(
"fill-mask",
model='jean-paul/KinyaBERT-small',
tokenizer='jean-paul/KinyaBERT-small',
)
the_mask_pipe("Ejo ndikwiga nagize [MASK] baje kunsura.")
[{'sequence': 'ejo ndikwiga nagize ubwoba baje kunsura.', 'score': 0.15674786269664764, 'token': 2387, 'token_str': 'ubwoba'},
{'sequence': 'ejo ndikwiga nagize ngo baje kunsura.', 'score': 0.13958698511123657, 'token': 196, 'token_str': 'ngo'},
{'sequence': 'ejo ndikwiga nagize inyota baje kunsura.', 'score': 0.07670339196920395, 'token': 8797, 'token_str': 'inyota'},
{'sequence': 'ejo ndikwiga nagize amahirwe baje kunsura.', 'score': 0.07234629988670349, 'token': 1501, 'token_str': 'amahirwe'},
{'sequence': 'ejo ndikwiga nagize abana baje kunsura.', 'score': 0.05717536434531212, 'token': 526, 'token_str': 'abana'}]
```
2) Direct use from the transformer library to get features using AutoModel
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("jean-paul/KinyaBERT-small")
model = AutoModelForMaskedLM.from_pretrained("jean-paul/KinyaBERT-small")
input_text = "Ejo ndikwiga nagize abashyitsi baje kunsura."
encoded_input = tokenizer(input_text, return_tensors='pt')
output = model(**encoded_input)
```
```
__Note__: We used the Hugging Face implementations for pretraining BERT from scratch, both the BERT model and the classes needed to do it. |
jeniakim/hedgehog | d3a64a1c24dce72e4a52c63570f7faa744678f55 | 2022-03-30T09:27:38.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | jeniakim | null | jeniakim/hedgehog | 9 | 1 | transformers | 12,343 | ---
language: en
license: mit
inference: false
---
🦔 HEDGEhog 🦔: BERT-based multi-class uncertainty cues recognition
====================================================================
# Description
A fine-tuned multi-class classification model that detects four different types of uncertainty cues (a.k.a hedges) on a token level.
# Uncertainty types
label | type | description | example
---| ---| ---| ---
E | Epistemic | The proposition is possible, but its truth-value cannot be decided at the moment. | She **may** be already asleep.
I | Investigation | The proposition is in the process of having its truth-value determined. | She **examined** the role of NF-kappaB in protein activation.
D | Doxatic | The proposition expresses beliefs and hypotheses, which may be known as true or false by others. | She **believes** that the Earth is flat.
N | Condition | The proposition is true or false based on the truth-value of another proposition. | **If** she gets the job, she will move to Utrecht.
C | *certain* | *n/a* | *n/a*
# Intended uses and limitations
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
# How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
from simpletransformers.ner import NERModel
model = NERModel(
'bert',
'jeniakim/hedgehog',
use_cuda=False,
labels=["C", "D", "E", "I", "N"],
)
example = "As much as I definitely enjoy solitude, I wouldn't mind perhaps spending little time with you (Björk)"
predictions, raw_outputs = model.predict([example])
```
The predictions look like this:
```
[[{'As': 'C'},
{'much': 'C'},
{'as': 'C'},
{'I': 'C'},
{'definitely': 'C'},
{'enjoy': 'C'},
{'solitude,': 'C'},
{'I': 'C'},
{"wouldn't": 'C'},
{'mind': 'C'},
{'perhaps': 'E'},
{'spending': 'C'},
{'little': 'C'},
{'time': 'C'},
{'with': 'C'},
{'you': 'C'},
{'(Björk)': 'C'}]]
```
In other words, the token 'perhaps' is recognized as an **epistemic uncertainty cue** and all the other tokens are not uncertainty cues.
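If you only need the cue tokens themselves, a small post-processing sketch over this output format could look like:
```
# Collect the tokens predicted as uncertainty cues (any label other than 'C'),
# keeping the predicted uncertainty type alongside each token.
cues = [
    (token, label)
    for sentence in predictions
    for token_dict in sentence
    for token, label in token_dict.items()
    if label != "C"
]
print(cues)  # [('perhaps', 'E')]
```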
# Training Data
HEDGEhog is trained and evaluated on the [Szeged Uncertainty Corpus](https://rgai.inf.u-szeged.hu/node/160) (Szarvas et al. 2012<sup>1</sup>). The original sentence-level XML version of this dataset is available [here](https://rgai.inf.u-szeged.hu/node/160).
The token-level version that was used for the training can be downloaded from [here](https://1drv.ms/u/s!AvPkt_QxBozXk7BiazucDqZkVxLo6g?e=IisuM6) in a form of pickled pandas DataFrame's. You can download either the split sets (```train.pkl``` 137MB, ```test.pkl``` 17MB, ```dev.pkl``` 17MB) or the full dataset (```szeged_fixed.pkl``` 172MB). Each row in the df contains a token, its features (these are not relevant for HEDGEhog; they were used to train the baseline CRF model, see [here](https://github.com/vanboefer/uncertainty_crf)), its sentence ID, and its label.
# Training Procedure
The following training parameters were used:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 16
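In Simple Transformers terms, these settings roughly correspond to the following (a sketch, not the original training script; the base checkpoint name is assumed for illustration, and AdamW is the library's default optimizer):
```
from simpletransformers.ner import NERModel

# Sketch of the reported fine-tuning setup.
model = NERModel(
    "bert",
    "bert-base-uncased",  # assumed base checkpoint
    labels=["C", "D", "E", "I", "N"],
    args={
        "learning_rate": 4e-5,
        "num_train_epochs": 1,
        "train_batch_size": 16,
    },
)
# model.train_model(train_df) would then fine-tune on the token-level data.
```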
# Evaluation Results
class | precision | recall | F1-score | support
---|---|---|---|---
Epistemic | 0.90 | 0.85 | 0.88 | 624
Doxatic | 0.88 | 0.92 | 0.90 | 142
Investigation | 0.83 | 0.86 | 0.84 | 111
Condition | 0.85 | 0.87 | 0.86 | 86
Certain | 1.00 | 1.00 | 1.00 | 104,751
**macro average** | **0.89** | **0.90** | **0.89** | 105,714
# References
<sup>1</sup> Szarvas, G., Vincze, V., Farkas, R., Móra, G., & Gurevych, I. (2012). Cross-genre and cross-domain detection of semantic uncertainty. *Computational Linguistics, 38*(2), 335-367.
|
jky594176/recipe_BART2 | 0c53b41356f0550307b5706a5fb68eeaa1f1286b | 2021-05-31T21:04:14.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | jky594176 | null | jky594176/recipe_BART2 | 9 | null | transformers | 12,344 | Entry not found |
jpabbuehl/sagemaker-distilbert-emotion | 61884dd0430fa1f49fbd0efc1c08de3e489fc805 | 2021-11-20T14:22:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jpabbuehl | null | jpabbuehl/sagemaker-distilbert-emotion | 9 | null | transformers | 12,345 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.929
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1446
- Accuracy: 0.929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
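The `emotion` dataset named in this card is, however, available through the `datasets` library; a minimal loading sketch:
```python
from datasets import load_dataset

# Load the emotion dataset used for fine-tuning and evaluation.
emotion = load_dataset("emotion")
print(emotion["train"][0])  # {'text': ..., 'label': ...}
```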
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9345 | 1.0 | 500 | 0.2509 | 0.918 |
| 0.1855 | 2.0 | 1000 | 0.1626 | 0.928 |
| 0.1036 | 3.0 | 1500 | 0.1446 | 0.929 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
jrbarnard/t5-generate-answer | 661789130f1d7bc047db447b3e777ddd96c79892 | 2021-06-23T12:30:00.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | jrbarnard | null | jrbarnard/t5-generate-answer | 9 | null | transformers | 12,346 | Entry not found |
juliamendelsohn/framing_narrative | b84f6f08c81954ff16a3f1ed97984e6f5692ea60 | 2021-05-20T17:28:06.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | juliamendelsohn | null | juliamendelsohn/framing_narrative | 9 | null | transformers | 12,347 | Entry not found |
keshan/sinhala-gpt2 | a3472bef30f6db8d5f6f230cfdedca87ca29999c | 2021-07-11T17:53:31.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"gpt2",
"feature-extraction",
"si",
"dataset:mc4",
"transformers",
"Sinhala",
"text-generation"
]
| feature-extraction | false | keshan | null | keshan/sinhala-gpt2 | 9 | null | transformers | 12,348 | ---
language: si
tags:
- Sinhala
- text-generation
- gpt2
datasets:
- mc4
---
### Overview
This is a smaller GPT2 model trained on the [MC4](https://github.com/allenai/allennlp/discussions/5056) Sinhala dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this is a good starting point for training on further downstream tasks.
## Model Specification
The model chosen for training is GPT2 with the following specifications:
1. vocab_size=50257
2. n_embd=768
3. n_head=12
4. n_layer=12
5. n_positions=1024
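In `transformers` terms, this corresponds to the following configuration (a sketch built directly from the values above):
```py
from transformers import GPT2Config

# Configuration matching the specification listed above.
config = GPT2Config(
    vocab_size=50257,
    n_embd=768,
    n_head=12,
    n_layer=12,
    n_positions=1024,
)
```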
## How to Use
You can use this model directly with a pipeline for casual language modeling:
```py
from transformers import pipeline
generator = pipeline('text-generation', model='keshan/sinhala-gpt2')
generator("මම", max_length=50, num_return_sequences=5)
```
|
kitaev/tetra-tag-en | a2ba70aec8fd516c02021fda82974dda02f66c78 | 2021-05-19T21:01:26.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kitaev | null | kitaev/tetra-tag-en | 9 | null | transformers | 12,349 | Entry not found |
kleinay/qanom-seq2seq-model-baseline | 417ef6dcec8ea534c7092263aa0d7f53c7dde13a | 2022-04-04T11:05:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:kleinay/qanom",
"transformers",
"semantic-role-labeling",
"question-answer generation",
"autotrain_compatible"
]
| text2text-generation | false | kleinay | null | kleinay/qanom-seq2seq-model-baseline | 9 | null | transformers | 12,350 | ---
language:
- en
tags:
- semantic-role-labeling
- question-answer generation
- pytorch
datasets:
- kleinay/qanom
---
# A Seq2Seq model for QANom parsing
This is a `t5-small` pretrained model, fine-tuned on the task of generating QANom QAs.
"QANom" stands for "QASRL for Nominalizations", which is an adaptation of [QASRL (Question-Answer driven Semantic Role Labeling)](https://qasrl.org) for the nominal predicates domain. See the [QANom paper](https://aclanthology.org/2020.coling-main.274/) for details about the task. The QANom Dataset official site is a [Google drive](https://drive.google.com/drive/folders/15PHKVdPm65ysgdkV47z6J_73kETk7_of), but we also wrapped it into a [Huggingface Dataset](https://huggingface.co/datasets/biu-nlp/qanom), which is easier to plug-and-play with (check out our [HF profile](https://huggingface.co/biu-nlp) for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).
## Demo
Visit [our demo](https://huggingface.co/spaces/kleinay/qanom-seq2seq-demo) for interactively exploring our model!
## Usage
The model and tokenizer can be downloaded as simply as running:
```python
import transformers
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
tokenizer = transformers.AutoTokenizer.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
```
However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
In order to use the model for QANom parsing easily, we suggest downloading the [`pipeline.py`](https://huggingface.co/kleinay/qanom-seq2seq-model-baseline/blob/main/pipeline.py) file from this repository, and then use the `QASRL_Pipeline` class:
```python
from pipeline import QASRL_Pipeline
pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline")
pipe("The student was interested in Luke 's <predicate> research about see animals .", verb_form="research", predicate_type="nominal")
```
Which will output:
```python
[{'generated_text': 'who _ _ researched something _ _ ?<extra_id_7> Luke',
'QAs': [{'question': 'who researched something ?', 'answers': ['Luke']}]}]
```
You can learn more about using `transformers.pipelines` in the [official docs](https://huggingface.co/docs/transformers/main_classes/pipelines).
Notice that you need to specify which word in the sentence is the predicate that the questions should be about. By default, you should precede the predicate with the `<predicate>` symbol, but you can also specify your own predicate marker:
```python
pipe("The student was interested in Luke 's <PRED> research about see animals .", verb_form="research", predicate_type="nominal", predicate_marker="<PRED>")
```
In addition, you can specify additional kwargs for controlling the model's decoding algorithm:
```python
pipe("The student was interested in Luke 's <predicate> research about see animals .", verb_form="research", predicate_type="nominal", num_beams=3)
```
|
kykim/t5-kor-small | 29fbf536a797fdcb63c4af4b71b70df615e18d53 | 2021-06-23T12:31:25.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"feature-extraction",
"transformers"
]
| feature-extraction | false | kykim | null | kykim/t5-kor-small | 9 | null | transformers | 12,351 | Entry not found |
lgris/sew-tiny-portuguese-cv | b924758fccaff0db4c5a8baa8b699fab25ecf68a | 2022-03-23T18:27:49.000Z | [
"pytorch",
"sew",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | lgris | null | lgris/sew-tiny-portuguese-cv | 9 | null | transformers | 12,352 | ---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- common_voice
model-index:
- name: sew-tiny-portuguese-cv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 30.02
- name: Test CER
type: cer
value: 10.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 56.46
- name: Test CER
type: cer
value: 22.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 57.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 61.3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5110
- Wer: 0.2842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log | 4.92 | 1000 | 0.8468 | 0.6494 |
| 3.4638 | 9.85 | 2000 | 0.4978 | 0.3815 |
| 3.4638 | 14.78 | 3000 | 0.4734 | 0.3417 |
| 0.9904 | 19.7 | 4000 | 0.4577 | 0.3344 |
| 0.9904 | 24.63 | 5000 | 0.4376 | 0.3170 |
| 0.8849 | 29.55 | 6000 | 0.4225 | 0.3118 |
| 0.8849 | 34.48 | 7000 | 0.4354 | 0.3080 |
| 0.819 | 39.41 | 8000 | 0.4434 | 0.3004 |
| 0.819 | 44.33 | 9000 | 0.4710 | 0.3132 |
| 0.7706 | 49.26 | 10000 | 0.4497 | 0.3064 |
| 0.7706 | 54.19 | 11000 | 0.4598 | 0.3100 |
| 0.7264 | 59.11 | 12000 | 0.4271 | 0.3013 |
| 0.7264 | 64.04 | 13000 | 0.4333 | 0.2959 |
| 0.6909 | 68.96 | 14000 | 0.4554 | 0.3019 |
| 0.6909 | 73.89 | 15000 | 0.4444 | 0.2888 |
| 0.6614 | 78.81 | 16000 | 0.4734 | 0.3081 |
| 0.6614 | 83.74 | 17000 | 0.4820 | 0.3058 |
| 0.6379 | 88.67 | 18000 | 0.4416 | 0.2950 |
| 0.6379 | 93.59 | 19000 | 0.4614 | 0.2974 |
| 0.6055 | 98.52 | 20000 | 0.4812 | 0.3018 |
| 0.6055 | 103.45 | 21000 | 0.4700 | 0.3018 |
| 0.5823 | 108.37 | 22000 | 0.4726 | 0.2999 |
| 0.5823 | 113.3 | 23000 | 0.4979 | 0.2887 |
| 0.5597 | 118.23 | 24000 | 0.4813 | 0.2980 |
| 0.5597 | 123.15 | 25000 | 0.4968 | 0.2972 |
| 0.542 | 128.08 | 26000 | 0.5331 | 0.3059 |
| 0.542 | 133.0 | 27000 | 0.5046 | 0.2978 |
| 0.5185 | 137.93 | 28000 | 0.4882 | 0.2922 |
| 0.5185 | 142.85 | 29000 | 0.4945 | 0.2938 |
| 0.499 | 147.78 | 30000 | 0.4971 | 0.2913 |
| 0.499 | 152.71 | 31000 | 0.4948 | 0.2873 |
| 0.4811 | 157.63 | 32000 | 0.4924 | 0.2918 |
| 0.4811 | 162.56 | 33000 | 0.5128 | 0.2911 |
| 0.4679 | 167.49 | 34000 | 0.5098 | 0.2892 |
| 0.4679 | 172.41 | 35000 | 0.4966 | 0.2863 |
| 0.456 | 177.34 | 36000 | 0.5033 | 0.2839 |
| 0.456 | 182.27 | 37000 | 0.5114 | 0.2875 |
| 0.4453 | 187.19 | 38000 | 0.5154 | 0.2859 |
| 0.4453 | 192.12 | 39000 | 0.5102 | 0.2847 |
| 0.4366 | 197.04 | 40000 | 0.5110 | 0.2842 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
liamliang/hate_speech_content | 1dc3ab4825b27fb2a1f6e947185386ac6c9c993b | 2021-05-19T21:58:27.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | liamliang | null | liamliang/hate_speech_content | 9 | null | transformers | 12,353 | Entry not found |
liyijing024/hate_speech_target | edfd3575267df0c9df231861e57abdc7ddbfac95 | 2021-11-23T18:23:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | liyijing024 | null | liyijing024/hate_speech_target | 9 | null | transformers | 12,354 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_grad_adam | 456b78f0769e43207b0f1f26af35444f68d8a07f | 2021-10-29T21:07:50.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_grad_adam | 9 | null | transformers | 12,355 | Entry not found |
m3hrdadfi/icelandic-ner-bert | 2e9742e6d84b25b312654fb49d9d42fce070c7fe | 2021-05-27T17:14:13.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"is",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | m3hrdadfi | null | m3hrdadfi/icelandic-ner-bert | 9 | null | transformers | 12,356 | ---
language: is
license: apache-2.0
widget:
- text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
- text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ."
- text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ."
- text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ."
- text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ."
---
# IcelandicNER BERT
This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language.
The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) between 2018 and 2020 and covers eight types of entities:
- Date
- Location
- Miscellaneous
- Money
- Organization
- Percent
- Person
- Time
## Dataset Information
| | Records | B-Date | B-Location | B-Miscellaneous | B-Money | B-Organization | B-Percent | B-Person | B-Time | I-Date | I-Location | I-Miscellaneous | I-Money | I-Organization | I-Percent | I-Person | I-Time |
|:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|
| Train | 39988 | 3409 | 5980 | 4351 | 729 | 5754 | 502 | 11719 | 868 | 2112 | 516 | 3036 | 770 | 2382 | 50 | 5478 | 790 |
| Valid | 7063 | 570 | 1034 | 787 | 100 | 1078 | 103 | 2106 | 147 | 409 | 76 | 560 | 104 | 458 | 7 | 998 | 136 |
| Test | 8299 | 779 | 1319 | 935 | 153 | 1315 | 108 | 2247 | 172 | 483 | 104 | 660 | 167 | 617 | 10 | 1089 | 158 |
## Evaluation
The following table summarizes the scores obtained by the model, overall and per class.
| entity | precision | recall | f1-score | support |
|:-------------:|:---------:|:--------:|:--------:|:-------:|
| Date | 0.969466 | 0.978177 | 0.973802 | 779.0 |
| Location | 0.955201 | 0.953753 | 0.954476 | 1319.0 |
| Miscellaneous | 0.867033 | 0.843850 | 0.855285 | 935.0 |
| Money | 0.979730 | 0.947712 | 0.963455 | 153.0 |
| Organization | 0.893939 | 0.897338 | 0.895636 | 1315.0 |
| Percent | 1.000000 | 1.000000 | 1.000000 | 108.0 |
| Person | 0.963028 | 0.973743 | 0.968356 | 2247.0 |
| Time | 0.976879 | 0.982558 | 0.979710 | 172.0 |
| micro avg | 0.938158 | 0.938958 | 0.938558 | 7028.0 |
| macro avg | 0.950659 | 0.947141 | 0.948840 | 7028.0 |
| weighted avg | 0.937845 | 0.938958 | 0.938363 | 7028.0 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "m3hrdadfi/icelandic-ner-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
|
maelfabien/marcel_customer_service_large | 19b2639f4a19c624d23c9221dc92983f3f655bc9 | 2021-04-13T23:23:56.000Z | [
"pytorch",
"camembert",
"text-generation",
"transformers"
]
| text-generation | false | maelfabien | null | maelfabien/marcel_customer_service_large | 9 | null | transformers | 12,357 | Entry not found |
maelfabien/marcel_customer_service_medium | 25a1ddb0d0ec9d1665288f9dfa64ab16ad6d501f | 2021-04-13T23:42:16.000Z | [
"pytorch",
"camembert",
"text-generation",
"transformers"
]
| text-generation | false | maelfabien | null | maelfabien/marcel_customer_service_medium | 9 | null | transformers | 12,358 | Entry not found |
malay-huggingface/xlnet-large-bahasa-cased | e9ab399fc4e57730b3321e9c4df0581c9ad89545 | 2021-09-26T12:57:26.000Z | [
"pytorch",
"xlnet",
"feature-extraction",
"ms",
"transformers"
]
| feature-extraction | false | malay-huggingface | null | malay-huggingface/xlnet-large-bahasa-cased | 9 | null | transformers | 12,359 | ---
language: ms
---
# xlnet-large-bahasa-cased
Pretrained XLNET large language model for Malay.
## Pretraining Corpus
The `xlnet-large-bahasa-cased` model was pretrained on ~1.4 billion words. Below is the list of data we trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can be reproduced from [Malaya/pretrained-model/xlnet](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/xlnet).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then use it directly by initializing it like this:
```python
from transformers import XLNetModel, XLNetTokenizer
model = XLNetModel.from_pretrained('malay-huggingface/xlnet-large-bahasa-cased')
tokenizer = XLNetTokenizer.from_pretrained(
'malay-huggingface/xlnet-large-bahasa-cased',
do_lower_case = False,
)
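# A minimal forward pass (sketch): encode a Malay sentence and inspect
# the hidden states; xlnet-large uses a hidden size of 1024.
inputs = tokenizer('Saya suka membaca buku.', return_tensors='pt')
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)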
``` |
maxidl/iML-distilbert-base-uncased-predict | 2fd10dd30d3f71245a4db3533f5ea718a22e93b0 | 2021-11-18T21:47:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | maxidl | null | maxidl/iML-distilbert-base-uncased-predict | 9 | null | transformers | 12,360 | Entry not found |
maximedb/paws-x-all-x-en | b89acf3dc80b19aad39ca62eb18d57d188feb7d4 | 2021-10-20T18:40:36.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | maximedb | null | maximedb/paws-x-all-x-en | 9 | null | transformers | 12,361 | Entry not found |
mbeukman/xlm-roberta-base-finetuned-ner-wolof | 0afdf7ac5fe7e91fe1f2cafc8ec6f995b94b1d2b | 2021-11-25T09:04:43.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"wo",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-wolof | 9 | null | transformers | 12,362 | ---
language:
- wo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "SAFIYETU BÉEY Céy Koronaa !"
---
# xlm-roberta-base-finetuned-ner-wolof
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Wolof part.
More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to the training set's, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-wolof) (This model) | [base](https://huggingface.co/xlm-roberta-base) | wol | 66.12 | 69.46 | 63.09 | 30.00 | 84.00 | 54.00 | 59.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | wol | 69.01 | 73.25 | 65.23 | 27.00 | 85.00 | 52.00 | 67.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | wol | 69.02 | 67.60 | 70.51 | 30.00 | 84.00 | 44.00 | 71.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-wolof'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "SAFIYETU BÉEY Céy Koronaa !"
ner_results = nlp(example)
print(ner_results)
```
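If you want whole entities rather than per-token tags, recent `transformers` versions can group the B-/I- pieces for you. A minimal sketch, reusing the objects above (`aggregation_strategy` is assumed to be available in your installed version):
```
grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(grouped(example))  # merged entity spans with scores instead of per-token tags
```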
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili | a239f32f3b22fe5a91bb64db052c63ba46ebd5ff | 2021-11-25T09:05:03.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili | 9 | null | transformers | 12,363 | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to the training set's, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
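As a sketch of how the aggregate F1 reported above can be computed from such predictions, the `seqeval` package (assumed installed; it scores per entity rather than per token) can be used like this:
```
from seqeval.metrics import f1_score

# Toy example: gold and predicted tag sequences, one list per sentence.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]
print(f1_score(y_true, y_pred))
```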
|
michaelrglass/rag-token-nq-kgi0-zsre | 823f8804a0e07cc0e55c67aaf0ed549e462ba96e | 2021-04-20T12:53:18.000Z | [
"pytorch",
"rag",
"transformers"
]
| null | false | michaelrglass | null | michaelrglass/rag-token-nq-kgi0-zsre | 9 | null | transformers | 12,364 | Entry not found |
mnaylor/base-bert-finetuned-mtsamples | 855d6362b3fc8e2a0b05213c7a719d594560ca6e | 2021-07-19T15:53:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | mnaylor | null | mnaylor/base-bert-finetuned-mtsamples | 9 | null | transformers | 12,365 | # BERT Base Fine-tuned on MTSamples
This model is [BERT-base](https://huggingface.co/bert-base-uncased) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp).
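A minimal inference sketch (the sample sentence is an arbitrary placeholder, and the label names come from the checkpoint's config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "mnaylor/base-bert-finetuned-mtsamples"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Patient presents with chest pain and shortness of breath.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```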
|
monologg/kocharelectra-base-kmounlp-ner | faa4a5584d173ceda8dc9ffb14f9a8fe150535d1 | 2020-12-02T15:28:07.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | monologg | null | monologg/kocharelectra-base-kmounlp-ner | 9 | null | transformers | 12,366 | Entry not found |
mrm8488/RoBERTinha | 389c1984d098683d415cd070d970f1a6dc841060 | 2021-05-20T18:03:32.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"gl",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | mrm8488 | null | mrm8488/RoBERTinha | 9 | null | transformers | 12,367 | ---
language: gl
widget:
- text: "Galicia é unha <mask> autónoma española."
- text: "A lingua oficial de Galicia é o <mask>."
---
# RoBERTinha: RoBERTa-like Language model trained on OSCAR Galician corpus
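A minimal fill-mask sketch, reusing one of the widget sentences above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mrm8488/RoBERTinha")
print(fill_mask("A lingua oficial de Galicia é o <mask>."))
```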
|
mrm8488/bert-tiny-3-finetuned-squadv2 | 6ad3c80fdd0769c6ded488accfd81ad1902f5fba | 2021-05-20T00:39:15.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/bert-tiny-3-finetuned-squadv2 | 9 | null | transformers | 12,368 | Entry not found |
mrm8488/bert2bert-mini_shared-question-generation | cda01c12dbe172f223daf3d88dc18cd4eb4b8396 | 2020-12-26T12:52:16.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/bert2bert-mini_shared-question-generation | 9 | null | transformers | 12,369 | Entry not found |
mrm8488/bert2bert_shared-italian-question-generation | 5260aa5037a7fab18c8986b1ba2980f9d2d43017 | 2020-12-11T14:33:07.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/bert2bert_shared-italian-question-generation | 9 | null | transformers | 12,370 | Entry not found |
mrm8488/bioclinical-roberta-es-finenuned-clinical-ner | 55a5299ef9cfe626a5fd7b34b27913f978b4a4e4 | 2022-01-24T16:04:07.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | mrm8488 | null | mrm8488/bioclinical-roberta-es-finenuned-clinical-ner | 9 | null | transformers | 12,371 | Entry not found |
mrm8488/codebert2codebert-finetuned-code-refinement-small | d74879d65ed6e886c22de0c62688dbfe52073d8d | 2021-06-11T14:36:00.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/codebert2codebert-finetuned-code-refinement-small | 9 | null | transformers | 12,372 | Entry not found |
mrm8488/distilroberta-finetuned-rotten_tomatoes-sentiment-analysis | e385ca4b241e411e8de0a7de6e36afc5548254d0 | 2021-08-26T15:28:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | mrm8488 | null | mrm8488/distilroberta-finetuned-rotten_tomatoes-sentiment-analysis | 9 | null | transformers | 12,373 | Entry not found |
mrm8488/electricidad-base-finetuned-medical-diagnostics | e448e764f4eec528e58ad5091ae3cefdc2b94e16 | 2021-10-04T17:03:07.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | mrm8488 | null | mrm8488/electricidad-base-finetuned-medical-diagnostics | 9 | null | transformers | 12,374 | ---
language: es
widget:
- text: "TUMOR DE COMPORTAMIENTO INCIERTO O DESCONOCIDO DEL HNGADO, DE LA VESNCULA BILIAR Y DEL CONDUCTO BILIAR - DiagnNstico Principal - Z01.8 OTROS EXNMENES ESPECIALES ESPECIFICADOS"
---
# Electricidad (base) fine-tuned on medical diagnostics |
mrm8488/mobilebert-uncased-finetuned-squadv1 | 4611efec8432379bfb867f2f48fe5b55d62d3f10 | 2020-12-11T21:54:41.000Z | [
"pytorch",
"mobilebert",
"question-answering",
"en",
"dataset:squad",
"arxiv:2004.02984",
"transformers",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/mobilebert-uncased-finetuned-squadv1 | 9 | null | transformers | 12,375 | ---
language: en
datasets:
- squad
---
# MobileBERT + SQuAD (v1.1) 📱❓
[mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) fine-tuned on the [SQuAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for the **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
**MobileBERT** is a thin version of *BERT_LARGE*, equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
The checkpoint used here is the original MobileBERT Optimized Uncased English (uncased_L-24_H-128_B-512_A-4_F-4_OPT) checkpoint.
More about the model [here](https://arxiv.org/abs/2004.02984)
## Details of the downstream task (Q&A) - Dataset 📚
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type bert \
--model_name_or_path 'google/mobilebert-uncased' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v1.1.json' \
--predict_file '/content/dataset/dev-v1.1.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 5 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/output' \
--overwrite_output_dir \
--save_steps 1000
```
It is worth noting that this model converges much faster than other ones, so it is also cheap to fine-tune.
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **82.33** |
| **F1** | **89.64** |
| **Size**| **94 MB** |
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/mobilebert-uncased-finetuned-squadv1')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'Who did identified it ?'
})
# Output: {'answer': 'scientists.', 'end': 106, 'score': 0.7885545492172241, 'start': 96}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/t5-small-finetuned-common_gen | ac76a88701e5b6b8af75be9603ee26f4b361fe2f | 2021-06-23T13:08:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-common_gen | 9 | null | transformers | 12,376 | Entry not found |
muirkat/tolkien-mythopoeic-gen | 6d4d2b3fd8c70619fc0311436b9bef512dda5691 | 2021-09-17T21:28:53.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | muirkat | null | muirkat/tolkien-mythopoeic-gen | 9 | null | transformers | 12,377 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: tolkien-mythopoeic-gen
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tolkien-mythopoeic-gen
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on Tolkien's mythopoeic works, namely *The Silmarillion* and *Unfinished Tales of Númenor and Middle-earth*.
It achieves the following results on the evaluation set:
- Loss: 3.5110
## Model description
More information needed
## Intended uses & limitations
More information needed
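Pending a fuller description, a minimal generation sketch (the prompt is an arbitrary illustration):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="muirkat/tolkien-mythopoeic-gen")
print(generator("In the deeps of time, before the making of the Sun,", max_length=60)[0]["generated_text"])
```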
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5732 | 1.0 | 145 | 3.5110 |
| 3.5713 | 2.0 | 290 | 3.5110 |
| 3.5718 | 3.0 | 435 | 3.5110 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
nalini2799/CDAC_hindispeechrecognition | 97980c233b96b7d854441ff86a2d628f0b678490 | 2021-12-10T20:20:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:Interspeech 2021",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | nalini2799 | null | nalini2799/CDAC_hindispeechrecognition | 9 | 1 | transformers | 12,378 | ---
language: hi
datasets:
- Interspeech 2021
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Hindi by Nalini Kumari
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hi
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 72.73
---
# Hindi Speech-to-Text Model
The primary objective of this project is to develop a speech recognition system for the Hindi language, as very few speech-to-text systems are available for Hindi. It is therefore an attempt to develop such a system, in which a language model is trained with machine-learning libraries for Hindi speech-to-text conversion. |
napsternxg/scibert_scivocab_uncased_ft_tv_SDU21_AI | 879754cce3d67126c42d1f15e70e34e94aa663df | 2021-05-20T01:11:49.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | napsternxg | null | napsternxg/scibert_scivocab_uncased_ft_tv_SDU21_AI | 9 | null | transformers | 12,379 | scibert_scivocab_uncased_ft_tv MLM pretrained on SDU21 Task 1 + 2
|
napsternxg/scibert_scivocab_uncased_tv_SDU21_AI | dfd3d064269982b49d9a74759ca2764955be9c61 | 2021-05-20T01:12:46.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | napsternxg | null | napsternxg/scibert_scivocab_uncased_tv_SDU21_AI | 9 | null | transformers | 12,380 | scibert_scivocab_uncased_tv submission for SDU21 Task 1 AI
|
neuropark/sahajBERT-NCC | 0225852b736f10da3ef073b276ff6d9004473fed | 2021-06-15T12:40:08.000Z | [
"pytorch",
"albert",
"text-classification",
"bn",
"dataset:IndicGlue",
"transformers",
"collaborative",
"bengali",
"SequenceClassification",
"license:apache-2.0"
]
| text-classification | false | neuropark | null | neuropark/sahajBERT-NCC | 9 | 2 | transformers | 12,381 |
---
language: bn
tags:
- collaborative
- bengali
- SequenceClassification
license: apache-2.0
datasets: IndicGlue
metrics:
- Loss
- Accuracy
- Precision
- Recall
widget:
- text: "এশিয়ায় প্রথম দৃষ্টিহীন ব্যক্তির মাউন্ট এভারেস্ট জয়|"
---
# sahajBERT News Article Classification
## Model description
[sahajBERT](https://huggingface.co/neuropark/sahajBERT) fine-tuned for news article classification using the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).
The model is trained for classifying articles into 6 different classes:
| Label id | Label |
|:--------:|:----:|
|0 | kolkata|
|1 | state|
|2 | national|
|3 | sports|
|4 | entertainment|
|5 | international|
## Intended uses & limitations
#### How to use
You can use this model directly with a pipeline for Sequence Classification:
```python
from transformers import AlbertForSequenceClassification, TextClassificationPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NCC")
# Initialize model
model = AlbertForSequenceClassification.from_pretrained("neuropark/sahajBERT-NCC")
# Initialize pipeline
pipeline = TextClassificationPipeline(tokenizer=tokenizer, model=model)
raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```
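To map a prediction back to the class names in the table above, you can inspect the checkpoint's config (a sketch; whether it stores the real names or generic `LABEL_x` ids depends on how the model was exported):
```python
print(model.config.id2label)  # id -> label mapping stored in the checkpoint
# If it only contains generic ids (LABEL_0, ...), fall back to the table above:
label_names = ["kolkata", "state", "national", "sports", "entertainment", "international"]
```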
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT) at step 19519 and trained on the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).
## Training procedure
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
## Eval results
Loss: 0.2477145493030548
Accuracy: 0.926293408929837
Macro F1: 0.9079785326650756
Recall: 0.926293408929837
Weighted F1: 0.9266428029354202
Macro Precision: 0.9109938492260489
Micro Precision: 0.926293408929837
Weighted Precision: 0.9288535478995414
Macro Recall: 0.9069095007692186
Micro Recall: 0.926293408929837
Weighted Recall: 0.926293408929837
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
|
nlokam/books_to_bots_v.00 | 04ebf8703f36cb5c64775e231b83664475d1eecd | 2021-12-02T21:51:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | nlokam | null | nlokam/books_to_bots_v.00 | 9 | null | transformers | 12,382 | ---
tags:
- conversational
---
# Books to Bots V.00 |
nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Base | cc69d2110175a18be978f1d52fb45b407576e797 | 2021-06-20T19:02:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | nreimers | null | nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Base | 9 | null | transformers | 12,383 | # MiniLMv2
This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm) |
nyu-mll/roberta-base-100M-1 | cc1cedeecf92d5877c2c9a84e4a61d499e86c813 | 2021-05-20T18:53:55.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-100M-1 | 9 | null | transformers | 12,384 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release the 3 models with the lowest perplexities for each pretraining data size, out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
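Each checkpoint loads like any RoBERTa model; a minimal fill-mask sketch with one of the 100M-token runs:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="nyu-mll/roberta-base-100M-1")
print(unmasker("The capital of France is <mask>."))
```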
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
o2poi/sst2-eda-bert | 1127bb45d99ffba3f0f4516d9756e3f54b5a9480 | 2021-06-11T13:00:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | o2poi | null | o2poi/sst2-eda-bert | 9 | null | transformers | 12,385 | Entry not found |
oandreae/financial_sentiment_model | 2c1f91cf2b66d4670cb807d473e0635d042170c6 | 2022-01-20T20:00:01.000Z | [
"pytorch",
"tensorboard",
"perceiver",
"text-classification",
"dataset:financial_phrasebank",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | oandreae | null | oandreae/financial_sentiment_model | 9 | 1 | transformers | 12,386 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- recall
- accuracy
- precision
model-index:
- name: financial_sentiment_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_50agree
metrics:
- name: Recall
type: recall
value: 0.8839956357328868
- name: Accuracy
type: accuracy
value: 0.8804123711340206
- name: Precision
type: precision
value: 0.8604175202419276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# financial_sentiment_model
This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3467
- Recall: 0.8840
- Accuracy: 0.8804
- Precision: 0.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
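Pending a fuller description, a minimal inference sketch. This is hedged: it assumes the Auto classes resolve to the Perceiver sequence-classification head for this checkpoint, and that `padding="max_length"` matches how the base tokenizer is normally used; the sample sentence is arbitrary.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "oandreae/financial_sentiment_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Operating profit rose in the third quarter.", padding="max_length", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```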
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.4481 | 1.0 | 273 | 0.4035 | 0.8526 | 0.8433 | 0.7955 |
| 0.4069 | 2.0 | 546 | 0.4478 | 0.8683 | 0.8289 | 0.8123 |
| 0.2225 | 3.0 | 819 | 0.3167 | 0.8747 | 0.8680 | 0.8387 |
| 0.1245 | 4.0 | 1092 | 0.3467 | 0.8840 | 0.8804 | 0.8604 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16 | 33828469971906e457488ace2957ff18182a4bb1 | 2020-12-11T21:59:26.000Z | [
"pytorch",
"encoder_decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | patrickvonplaten | null | patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16 | 9 | null | transformers | 12,387 | # Shared Roberta2Roberta Summarization with 🤗 EncoderDecoder Framework
This model is a shared Roberta2Roberta model, meaning that the encoder and decoder weights are tied, fine-tuned on summarization.
Roberta2Roberta is an `EncoderDecoderModel`, meaning that both the encoder and the decoder are `roberta-base`
RoBERTa models. In this setup the encoder and decoder weights are tied. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the
two pretrained models can simply be loaded into the framework via:
```python
roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base", tie_encoder_decoder=True)
```
The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal
masking for auto-regressive generation.
Thus, ``roberta2roberta`` is fine-tuned on the `CNN/Daily Mail` dataset, and the resulting model
`roberta2roberta-share-cnn_dailymail-fp16` is uploaded here.
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable summarization results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 EncoderDecoder Framework.
The model can be used as follows:
```python
from transformers import RobertaTokenizer, EncoderDecoderModel
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David B
oren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 185
6, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confede
rate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking fu
ll membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on t
he fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more invol
ved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members al
legedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a frat
ernity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,
' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloy
d's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing in
cidents."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# should produce
# SAE's national chapter suspended after video shows party-bound fraternity members singing racist chant. University of Oklahoma president says university's affiliation with fraternity is permanently done.
# SAE has had to close 12 chapters since 2010 after members were killed in hazing. The fraternity has had more than 130 chapters in 18 months.
```
## Training script:
**IMPORTANT**: In order for this code to work, make sure you checkout to the branch
[more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts
the `Trainer` for `EncoderDecoderModels` according to this PR: https://github.com/huggingface/transformers/pull/5840.
The following code shows the complete training script that was used to fine-tune `roberta2roberta-share-cnn_dailymail-fp16` for reproducibility. The training took ~9h on a standard GPU.
```python
#!/usr/bin/env python3
import nlp
import logging
from transformers import RobertaTokenizer, EncoderDecoderModel, Trainer, TrainingArguments
logging.basicConfig(level=logging.INFO)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base", tie_encoder_decoder=True)
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# load train and validation data
train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")
# load rouge for validation
rouge = nlp.load_metric("rouge", experiment_id=0)
# set decoding params
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4
encoder_length = 512
decoder_length = 128
batch_size = 16
# map data correctly
def map_to_encoder_decoder_inputs(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at Longformer at 2048
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length)
# force summarization <= 256
outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["labels"] = outputs.input_ids.copy()
# mask loss for padding
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
]
batch["decoder_attention_mask"] = outputs.attention_mask
assert all([len(x) == encoder_length for x in inputs.input_ids])
assert all([len(x) == decoder_length for x in outputs.input_ids])
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
# all unnecessary tokens are removed
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.eos_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
# make train dataset ready
train_dataset = train_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
train_dataset.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_attention_mask", "decoder_input_ids", "labels"],
)
# same for validation dataset
val_dataset = val_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
val_dataset.set_format(
type="torch", columns=["input_ids", "decoder_attention_mask", "attention_mask", "decoder_input_ids", "labels"],
)
# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
output_dir="./",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
predict_from_generate=True,
evaluate_during_training=True,
do_train=True,
do_eval=True,
logging_steps=1000,
save_steps=1000,
eval_steps=1000,
overwrite_output_dir=True,
warmup_steps=2000,
save_total_limit=3,
fp16=True,
)
# instantiate trainer
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
# start training
trainer.train()
```
## Evaluation
The following script evaluates the model on the test set of
CNN/Daily Mail.
```python
#!/usr/bin/env python3
import nlp
from transformers import RobertaTokenizer, EncoderDecoderModel
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16")
model.to("cuda")
test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test")
batch_size = 128
# map data correctly
def generate_summary(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at BERT max length 512
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
outputs = model.generate(input_ids, attention_mask=attention_mask)
# all special tokens including will be removed
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
batch["pred"] = output_str
return batch
results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"])
# load rouge for validation
rouge = nlp.load_metric("rouge")
pred_str = results["pred"]
label_str = results["highlights"]
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
print(rouge_output)
```
The obtained results should be:
| - | Rouge2 - mid -precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:-------------:|:------:|:------:|
| **CNN/Daily Mail** | 15.6 | 18.79 | **16.59** |
|
pchanda/pretrained-smiles-pubchem10m | 3f5c603606a89f07c080f8dd564419f8814a161e | 2021-05-20T13:01:15.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | pchanda | null | pchanda/pretrained-smiles-pubchem10m | 9 | null | transformers | 12,388 | model pretrained on 10m smiles from pubchem.
|
philschmid/MiniLMv2-L6-H384-emotion | 96350ab4dfcc93b17a7759e1ab53dd73db2a5589 | 2021-12-06T19:59:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | philschmid | null | philschmid/MiniLMv2-L6-H384-emotion | 9 | null | transformers | 12,389 | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: MiniLMv2-L6-H384-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-emotion
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Accuracy: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
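Pending a fuller description, a minimal inference sketch:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="philschmid/MiniLMv2-L6-H384-emotion")
print(classifier("I am so happy today!"))
```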
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.432 | 1.0 | 500 | 0.9992 | 0.6805 |
| 0.8073 | 2.0 | 1000 | 0.5437 | 0.846 |
| 0.4483 | 3.0 | 1500 | 0.3018 | 0.909 |
| 0.2833 | 4.0 | 2000 | 0.2412 | 0.915 |
| 0.2169 | 5.0 | 2500 | 0.2140 | 0.9215 |
| 0.1821 | 6.0 | 3000 | 0.2159 | 0.917 |
| 0.154 | 7.0 | 3500 | 0.2084 | 0.919 |
| 0.1461 | 8.0 | 4000 | 0.2047 | 0.92 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
pierric/autonlp-my-own-imdb-sentiment-analysis-2131817 | fd725976967405236012b2804c686927764ac889 | 2021-06-29T22:08:35.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:pierric/autonlp-data-my-own-imdb-sentiment-analysis",
"transformers",
"autonlp"
]
| text-classification | false | pierric | null | pierric/autonlp-my-own-imdb-sentiment-analysis-2131817 | 9 | null | transformers | 12,390 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- pierric/autonlp-data-my-own-imdb-sentiment-analysis
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2131817
## Validation Metrics
- Loss: 0.24430708587169647
- Accuracy: 0.9452
- Precision: 0.9303944315545244
- Recall: 0.9624
- AUC: 0.9793824287999999
- F1: 0.946126622099882
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/pierric/autonlp-my-own-imdb-sentiment-analysis-2131817
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pierric/autonlp-my-own-imdb-sentiment-analysis-2131817", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pierric/autonlp-my-own-imdb-sentiment-analysis-2131817", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
pitehu/T5_NER_CONLL_ENTITYREPLACE | 13fff880ad3a63581e6bebc7d962ef0907dd4d84 | 2022-01-28T11:05:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:CoNLL-2003",
"arxiv:2111.10952",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | pitehu | null | pitehu/T5_NER_CONLL_ENTITYREPLACE | 9 | null | transformers | 12,391 |
---
language:
- en
license: "apache-2.0"
datasets:
- CoNLL-2003
metrics:
- F1
---
This is a T5-small model fine-tuned on the CoNLL-2003 dataset for named entity recognition (NER).
Example Input and Output:
“Recognize all the named entities in this sequence (replace named entities with one of [PER], [ORG], [LOC], [MISC]): When Alice visited New York” → “When PER visited LOC LOC”
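As a minimal inference sketch (assuming the standard `transformers` seq2seq API; the prompt reuses the example above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pitehu/T5_NER_CONLL_ENTITYREPLACE"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = ("Recognize all the named entities in this sequence "
          "(replace named entities with one of [PER], [ORG], [LOC], [MISC]): "
          "When Alice visited New York")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Expected, per the example above: "When PER visited LOC LOC"
```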
Evaluation results:

Percentage of complete matches (for comparison with ExT5: https://arxiv.org/pdf/2111.10952.pdf):

| Model | ExT5_{Base} | This Model | T5_NER_CONLL_OUTPUTLIST |
| :---: | :---: | :---: | :---: |
| % of Complete Match | 86.53 | 79.03 | TBA |
There are some outputs (212/3453, or 6.14%) that do not have the same length as the input.

F1 score on the testing set, restricted to examples with matching output length:

| Model | This Model | T5_NER_CONLL_OUTPUTLIST | BERT (base) |
| :---: | :---: | :---: | :---: |
| F1 | 0.8901 | 0.8691 | 0.9240 |

**Caveat:** these testing sets are not identical, due to the matching-length issue. T5_NER_CONLL_OUTPUTLIST has only 27/3453 outputs (0.78%) with mismatched length; the BERT number is taken directly from the paper (https://arxiv.org/pdf/1810.04805.pdf).
|
psychicautomaton/bert-base-uncased-finetuned-suicide | 56962716fd988bc02459a08ad6ce0065da998fcc | 2021-12-21T23:48:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | psychicautomaton | null | psychicautomaton/bert-base-uncased-finetuned-suicide | 9 | null | transformers | 12,392 | Entry not found |
rafagudinov/en_rent_estate_ads | f5bbec1774ddc1b0f2625b82312b407b331e2355 | 2021-11-29T17:24:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | rafagudinov | null | rafagudinov/en_rent_estate_ads | 9 | null | transformers | 12,393 | Entry not found |
ramybaly/ner_nerd | a988f226e22e9b17176de5a384bab85a778eb12b | 2021-08-07T04:20:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:nerd",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | ramybaly | null | ramybaly/ner_nerd | 9 | null | transformers | 12,394 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- nerd
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: ner_nerd
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: nerd
type: nerd
args: nerd
metric:
name: Accuracy
type: accuracy
value: 0.9391592461061087
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_nerd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the nerd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2245
- Precision: 0.7466
- Recall: 0.7873
- F1: 0.7664
- Accuracy: 0.9392
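A minimal usage sketch with the transformers token-classification pipeline (the example sentence and the aggregation strategy are our own choices, not from the original card):

```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ramybaly/ner_nerd",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Barack Obama visited Microsoft headquarters in Seattle."))
```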
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2843 | 1.0 | 8235 | 0.1951 | 0.7352 | 0.7824 | 0.7580 | 0.9375 |
| 0.1655 | 2.0 | 16470 | 0.1928 | 0.7519 | 0.7827 | 0.7670 | 0.9398 |
| 0.1216 | 3.0 | 24705 | 0.2119 | 0.75 | 0.7876 | 0.7684 | 0.9396 |
| 0.0881 | 4.0 | 32940 | 0.2258 | 0.7515 | 0.7896 | 0.7701 | 0.9392 |
| 0.0652 | 5.0 | 41175 | 0.2564 | 0.7518 | 0.7875 | 0.7692 | 0.9387 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.2
|
rsvp-AI-ca/segabert-uncased-base-50k | a29f7c6d6a08c418a6d191a03533b848e2073930 | 2020-12-13T03:04:25.000Z | [
"pytorch",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | rsvp-AI-ca | null | rsvp-AI-ca/segabert-uncased-base-50k | 9 | null | transformers | 12,395 | Entry not found |
ruiqi-zhong/roberta-large-meta-tuning-test | ce6b194ba8ecd09a8a7cd757d7eb14c7799ca366 | 2021-09-14T03:24:59.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | ruiqi-zhong | null | ruiqi-zhong/roberta-large-meta-tuning-test | 9 | null | transformers | 12,396 | Entry not found |
saattrupdan/xlmr-base-texas-squad-da | f69bdb83d3537d0404f471f8cd7b23757b6ba3b1 | 2022-02-05T15:55:52.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"da",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | saattrupdan | null | saattrupdan/xlmr-base-texas-squad-da | 9 | null | transformers | 12,397 | ---
language:
- da
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-da
results: []
widget:
- text: "Hvem handler artiklen om?"
context: "Forfatter og musiker Flemming Quist Møller er død i en alder af 79 år. Den folkekære kunstner faldt om ved morgenbordet med en blodprop i hjertet i mandags. Det kunne forfatterens søn, Carl Quist-Møller, bekræfte over for TV 2 Lorry.- Han faldt om i det hus i Taarbæk, hvor han er vokset op og også har boet de sidste år af sit liv. Han blev lagt i koma på Rigshospitalet. Her har vi siddet omkring ham i en uge, siger Carl Quist-Møller til mediet.MindeordI mange år var Flemming Quist Møller en del af bandet Bazaar sammen med Peter Bastian, Anders Koppel og Mehmet Ozan.Anders Koppel er tydeligt rørt over vennens død, da Ekstra Bladet rækker ud til ham mandag aften.- Det er en stor del af mit liv, der er forsvundet med Flemmings liv, det er klart. Vi har spillet sammen i 37 år, siger han og fortsætter:- Jeg vil mest huske ham for hans ukonventionelle tilgang til alting. Flemming havde et meget stærkt blik for det autentiske og ærlige. Han var ikke bundet af normer -tværtimod, hvis han så en norm, hvor noget skulle gøres på en bestemt måde, så flygtede han eller prøvede at springe det i stumper og stykker.Ifølge den danske musiker og komponist er netop følgende ord rammende for Flemming Quist Møller: Original, vidende, kompromisløs og humoristisk."
---
# TExAS-SQuAD-da
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-da dataset.
It achieves the following results on the evaluation set:
- Exact match: 63.96%
- F1-score: 68.40%
In comparison, the `jacobshein/danish-bert-botxo-qa-squad` model achieves 30.37% EM and 37.15% F1.
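A minimal usage sketch with the transformers question-answering pipeline (the question and the opening of the context are taken from the widget example above):

```
from transformers import pipeline

qa = pipeline("question-answering", model="saattrupdan/xlmr-base-texas-squad-da")
result = qa(
    question="Hvem handler artiklen om?",
    context="Forfatter og musiker Flemming Quist Møller er død i en alder af 79 år.",
)
print(result)  # dict with 'answer', 'score', 'start' and 'end'
```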
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6438 | 1.0 | 4183 | 1.4711 |
| 1.4079 | 2.0 | 8366 | 1.4356 |
| 1.2532 | 3.0 | 12549 | 1.4509 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
saraks/cuad-distil-parties-dates-law-08-25-clean-context1 | 8c69a44d54e301866cb26adb52fad70b662881af | 2021-08-25T10:00:44.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | saraks | null | saraks/cuad-distil-parties-dates-law-08-25-clean-context1 | 9 | null | transformers | 12,398 | Entry not found |
savasy/bert-turkish-uncased-qnli | e72cb2ce635b33c5f1ca4b1cf1c18d925f3f3074 | 2021-05-20T04:57:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | savasy | null | savasy/bert-turkish-uncased-qnli | 9 | null | transformers | 12,399 |
# Turkish QNLI Model
I fine-tuned the Turkish BERT model below for the QNLI problem, using TQuAD, the Turkish version of SQuAD:
https://huggingface.co/dbmdz/bert-base-turkish-uncased
# Data: TQuAD
I used the following TQuAD dataset:
https://github.com/TQuad/turkish-nlp-qa-dataset
I converted the dataset into the transformers GLUE data format for QNLI with the following script:
SQuAD -> QNLI
```
import json

# Pick the TQuAD split to convert: "dev-v0.1.json" or "train-v0.1.json"
ff = "train-v0.1.json"
dataset = json.load(open(ff))

i = 0
for article in dataset['data']:
    title = article['title']
    for p in article['paragraphs']:
        context = p['context']
        for qa in p['qas']:
            answer = qa['answers'][0]['text']
            # Every other distinct answer in the paragraph becomes a
            # negative (not_entailment) candidate for this question
            all_other_answers = list(set([e['answers'][0]['text'] for e in p['qas']]))
            all_other_answers.remove(answer)
            i = i + 1
            print(i, qa['question'].replace(";", ":"), answer.replace(";", ":"), "entailment", sep="\t")
            for other in all_other_answers:
                i = i + 1
                print(i, qa['question'].replace(";", ":"), other.replace(";", ":"), "not_entailment", sep="\t")
```
Under the QNLI folder there are dev and test sets.
The training data looks like this:
> 613 II.Friedrich’in bilginler arasındaki en önemli şahsiyet olarak belirttiği kişi kimdir? filozof, kimyacı, astrolog ve çevirmen not_entailment
> 614 II.Friedrich’in bilginler arasındaki en önemli şahsiyet olarak belirttiği kişi kimdir? kişisel eğilimi ve özel temaslar nedeniyle not_entailment
> 615 Michael Scotus’un mesleği nedir? filozof, kimyacı, astrolog ve çevirmen entailment
> 616 Michael Scotus’un mesleği nedir? Palermo’ya not_entailment
# Training
I trained the model in the following environment:
```
export GLUE_DIR=./glue/glue_dataTR/QNLI
export TASK_NAME=QNLI
```
```
python3 run_glue.py \
--model_type bert \
--model_name_or_path dbmdz/bert-base-turkish-uncased\
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR \
--max_seq_length 128 \
--per_gpu_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
# Evaluation Results
| Metric | Value |
| :--- | :--- |
| acc | 0.9124060613527165 |
| loss | 0.21582801340189717 |
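A minimal inference sketch (the sentence pair comes from the training data shown above; the label names are read from the model config and may differ):

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "savasy/bert-turkish-uncased-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

question = "Michael Scotus'un mesleği nedir?"
candidate = "filozof, kimyacı, astrolog ve çevirmen"

inputs = tokenizer(question, candidate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. "entailment" / "not_entailment"
```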
> See all my models at https://huggingface.co/savasy
|