modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/unkledell | 28acedc0d1194e65aaac4e547a81bb4ed4040b1b | 2021-07-26T13:48:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/unkledell | 0 | null | transformers | 35,300 | ---
language: en
thumbnail: https://www.huggingtweets.com/unkledell/1627307332006/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1199452477659238400/iMdWeVWZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">freeuzi</div>
<div style="text-align: center; font-size: 14px;">@unkledell</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from freeuzi.
| Data | freeuzi |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 138 |
| Short tweets | 1159 |
| Tweets kept | 1923 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ockzquq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
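These counts are related by simple filtering: tweets kept = tweets downloaded - retweets - short tweets (3220 - 138 - 1159 = 1923). Below is a minimal sketch of that kind of preprocessing on a plain list of tweet texts; the retweet check and the `MIN_WORDS` threshold are illustrative assumptions, not the project's actual code.
```python
# Illustrative preprocessing sketch (assumed logic, not huggingtweets' actual code).
MIN_WORDS = 3  # assumed threshold below which a tweet counts as "short"

def filter_tweets(tweets):
    """Drop retweets and very short tweets, keep the rest for training."""
    kept = []
    for text in tweets:
        if text.startswith("RT @"):        # retweet
            continue
        if len(text.split()) < MIN_WORDS:  # short tweet
            continue
        kept.append(text)
    return kept

downloaded = ["RT @someone: hello", "ok", "My dream is to ship this model today"]
print(filter_tweets(downloaded))  # -> ["My dream is to ship this model today"]
```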
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @unkledell's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17ij2gx7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17ij2gx7/artifacts) is logged and versioned.
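As a rough illustration of this step, here is a minimal fine-tuning sketch using the `transformers` Trainer with W&B logging; the in-line toy dataset, hyperparameters, and output directory are placeholders rather than the settings used for this model.
```python
# Minimal fine-tuning sketch (assumed setup, not the exact huggingtweets training script).
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tweets = ["My dream is to nap all day", "just shipped another model"]  # placeholder data

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokenize each tweet into input_ids; labels are produced by the collator.
train_dataset = [{"input_ids": ids} for ids in tokenizer(tweets, truncation=True)["input_ids"]]

args = TrainingArguments(
    output_dir="gpt2-tweets",         # placeholder output directory
    num_train_epochs=1,
    per_device_train_batch_size=1,
    report_to="wandb",                # assumes wandb is installed and logged in
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```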
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/unkledell')
generator("My dream is", num_return_sequences=5)
```
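Equivalently, you can load the tokenizer and model yourself and call `generate` directly; the following sketch samples five continuations of the same prompt:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/unkledell")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/unkledell")

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=40, num_return_sequences=5)
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```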
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/unlikelyvee | 824b963d01f9ddebbda3613fa56426dfbfec9d70 | 2021-05-23T03:23:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/unlikelyvee | 0 | null | transformers | 35,301 | ---
language: en
thumbnail: https://www.huggingtweets.com/unlikelyvee/1618115531130/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1379120795838861315/V7kEQ63C_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Veronica 🤖 AI Bot </div>
<div style="font-size: 15px">@unlikelyvee bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@unlikelyvee's tweets](https://twitter.com/unlikelyvee).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 466 |
| Short tweets | 951 |
| Tweets kept | 1816 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1df8eimg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @unlikelyvee's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o1p2kqo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o1p2kqo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/unlikelyvee')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/urmomlolroasted | d53beafc428ed0aa2e41a61ee0a8ede45f674664 | 2021-08-25T14:06:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/urmomlolroasted | 0 | null | transformers | 35,302 | ---
language: en
thumbnail: https://www.huggingtweets.com/urmomlolroasted/1629900362212/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1365680527307595778/V2TENQA-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">anna!!!!!</div>
<div style="text-align: center; font-size: 14px;">@urmomlolroasted</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from anna!!!!!.
| Data | anna!!!!! |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 477 |
| Short tweets | 700 |
| Tweets kept | 2015 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1s1eoov7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @urmomlolroasted's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36442rcs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36442rcs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/urmomlolroasted')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/usethespacebar | 299fc8becd3a0f81ec11592dcec63fe997ff4366 | 2021-05-23T03:28:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/usethespacebar | 0 | null | transformers | 35,303 | ---
language: en
thumbnail: https://www.huggingtweets.com/usethespacebar/1602235570438/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/599315492617093121/aPVlt3QL_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🧠 Grant R. Vousden-Dishington 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@usethespacebar bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@usethespacebar's tweets](https://twitter.com/usethespacebar).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 1289 |
| Short tweets | 91 |
| Tweets kept | 1798 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2nkbp17t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usethespacebar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1rogclc0) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1rogclc0/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/usethespacebar')
generator("My dream is", num_return_sequences=5)
```
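If you want reproducible samples, you can seed the relevant random number generators first; this optional snippet uses `transformers.set_seed`:
```python
from transformers import pipeline, set_seed

set_seed(42)  # fix the RNGs so repeated runs return the same samples
generator = pipeline('text-generation', model='huggingtweets/usethespacebar')
generator("My dream is", num_return_sequences=5)
```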
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/uspto | 2588204ad6ed3dad7a1c49320230dee8b918680e | 2021-05-23T03:29:42.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/uspto | 0 | null | transformers | 35,304 | ---
language: en
thumbnail: https://www.huggingtweets.com/uspto/1602172206975/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/684014734937559040/WhfNiuZx_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">USPTO 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@uspto bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@uspto's tweets](https://twitter.com/uspto).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3211 |
| Retweets | 530 |
| Short tweets | 8 |
| Tweets kept | 2673 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1e0afur2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @uspto's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2pwxi69v) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2pwxi69v/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/uspto')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vansianmagic | 59de846c30628921a7081a42a0e6024e0beb28a2 | 2021-05-23T03:33:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vansianmagic | 0 | null | transformers | 35,305 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1304024678738964480/XaiyULdl_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Vanse 🤖 AI Bot </div>
<div style="font-size: 15px">@vansianmagic bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vansianmagic's tweets](https://twitter.com/vansianmagic).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 381 |
| Short tweets | 406 |
| Tweets kept | 2446 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ev1v2vv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vansianmagic's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2trfqi5g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2trfqi5g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vansianmagic')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vaushv | 5b39cd708efaf34c50304897c7103c6a69f2469e | 2021-06-29T21:45:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vaushv | 0 | null | transformers | 35,306 | ---
language: en
thumbnail: https://www.huggingtweets.com/vaushv/1625003147753/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1191140885250985985/UhfX1EdI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Vaush</div>
<div style="text-align: center; font-size: 14px;">@vaushv</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Vaush.
| Data | Vaush |
| --- | --- |
| Tweets downloaded | 3227 |
| Retweets | 831 |
| Short tweets | 469 |
| Tweets kept | 1927 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2sbjngaz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vaushv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34fpf998) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34fpf998/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vaushv')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vccircle | 486de48116a4029399ea1432dc3c2420828d09ba | 2021-05-23T03:34:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vccircle | 0 | null | transformers | 35,307 | ---
language: en
thumbnail: https://www.huggingtweets.com/vccircle/1609942748755/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1103554885549666304/PzYNIvur_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">VCCircle 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@vccircle bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vccircle's tweets](https://twitter.com/vccircle).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 68 |
| Short tweets | 9 |
| Tweets kept | 3159 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ruq22ufk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vccircle's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/whr2irtk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/whr2irtk/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vccircle')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vecuroniyum | 5c91854cd9451f93aa5c77c6c74603941f27157a | 2021-05-23T03:35:17.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vecuroniyum | 0 | null | transformers | 35,308 | ---
language: en
thumbnail: https://www.huggingtweets.com/vecuroniyum/1618258091681/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1353296744096931841/HGGz6U1F_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">REBOA 🤖 AI Bot </div>
<div style="font-size: 15px">@vecuroniyum bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vecuroniyum's tweets](https://twitter.com/vecuroniyum).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 312 |
| Retweets | 25 |
| Short tweets | 56 |
| Tweets kept | 231 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bvb2kz0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vecuroniyum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2l7aj3oi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2l7aj3oi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vecuroniyum')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vendittilab | 3f7c0c140b93b03f3e077e2f34a4685bbbad1557 | 2021-05-23T03:36:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vendittilab | 0 | null | transformers | 35,309 | ---
language: en
thumbnail: https://www.huggingtweets.com/vendittilab/1617250632372/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1285530141758652418/e699399l_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jeremy Venditti 🤖 AI Bot </div>
<div style="font-size: 15px">@vendittilab bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vendittilab's tweets](https://twitter.com/vendittilab).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 190 |
| Retweets | 62 |
| Short tweets | 12 |
| Tweets kept | 116 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19ejg168/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vendittilab's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/11m1r0av) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/11m1r0av/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vendittilab')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vennesports | 9b5c10f695bda89dee6bf13611417aefbc10b014 | 2021-05-23T03:40:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vennesports | 0 | null | transformers | 35,310 | ---
language: en
thumbnail: https://www.huggingtweets.com/vennesports/1614525364767/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1348717540055285761/b64uTQVw_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">VENN ESPORTS 🤖 AI Bot </div>
<div style="font-size: 15px">@vennesports bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vennesports's tweets](https://twitter.com/vennesports).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 107 |
| Retweets | 16 |
| Short tweets | 22 |
| Tweets kept | 69 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wx6lrcs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vennesports's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1fch18xq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1fch18xq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vennesports')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vermontsmash | b8e488f7af4678ec8ebeeba23f4918f94548d992 | 2021-05-23T03:47:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vermontsmash | 0 | null | transformers | 35,311 | ---
language: en
thumbnail: https://www.huggingtweets.com/vermontsmash/1615171689234/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1186545353388101632/yGveN2N3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Vermont Smash Ultimate 🤖 AI Bot </div>
<div style="font-size: 15px">@vermontsmash bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vermontsmash's tweets](https://twitter.com/vermontsmash).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 161 |
| Retweets | 22 |
| Short tweets | 13 |
| Tweets kept | 126 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jy80fph/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vermontsmash's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/14r2owgg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/14r2owgg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vermontsmash')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/veryshortstory | a83f645c555e43dc7b6eafb90789e94477231a48 | 2021-05-23T03:48:43.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/veryshortstory | 0 | null | transformers | 35,312 | ---
language: en
thumbnail: https://www.huggingtweets.com/veryshortstory/1617111091869/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/141070636/42291_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Very Short Story 🤖 AI Bot </div>
<div style="font-size: 15px">@veryshortstory bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@veryshortstory's tweets](https://twitter.com/veryshortstory).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2223 |
| Retweets | 9 |
| Short tweets | 1 |
| Tweets kept | 2213 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2odcxxfj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @veryshortstory's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q00lb2z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q00lb2z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/veryshortstory')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vfahegao | 13ee570d34bf00a175cb5fe077758df2612ac9be | 2021-07-23T17:11:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vfahegao | 0 | null | transformers | 35,313 | ---
language: en
thumbnail: https://www.huggingtweets.com/vfahegao/1627060115826/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1398475970210828290/2kCvFheG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eye of the Devil Starring Sharon Tate 🍿🍿🍿🍿🍿🕊</div>
<div style="text-align: center; font-size: 14px;">@vfahegao</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Eye of the Devil Starring Sharon Tate 🍿🍿🍿🍿🍿🕊.
| Data | Eye of the Devil Starring Sharon Tate 🍿🍿🍿🍿🍿🕊 |
| --- | --- |
| Tweets downloaded | 421 |
| Retweets | 68 |
| Short tweets | 104 |
| Tweets kept | 249 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lu2ve0b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vfahegao's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tnz0xod) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tnz0xod/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vfahegao')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vikjapan | c2a4e1a5fe56fde74bc358cd4dffd8d0a3dc6ce8 | 2021-05-23T03:52:28.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vikjapan | 0 | null | transformers | 35,314 | ---
language: en
thumbnail: https://www.huggingtweets.com/vikjapan/1613790649780/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1348312846954758144/X9G_wGk4_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">びっく@ポジティブに日本を良くしたい🦄 🤖 AI Bot </div>
<div style="font-size: 15px">@vikjapan bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vikjapan's tweets](https://twitter.com/vikjapan).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 897 |
| Retweets | 14 |
| Short tweets | 464 |
| Tweets kept | 419 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/u2nd957y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vikjapan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15ytgwxd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15ytgwxd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vikjapan')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/viktar_babaryka | 8d856b8ced175da37be76c85f54bd14e4ffafadd | 2021-07-16T15:48:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/viktar_babaryka | 0 | null | transformers | 35,315 | ---
language: en
thumbnail: https://www.huggingtweets.com/viktar_babaryka/1626450488055/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1330073417048010752/kh1pK808_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Виктор Бабарико</div>
<div style="text-align: center; font-size: 14px;">@viktar_babaryka</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Виктор Бабарико.
| Data | Виктор Бабарико |
| --- | --- |
| Tweets downloaded | 1254 |
| Retweets | 29 |
| Short tweets | 64 |
| Tweets kept | 1161 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d99evm6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @viktar_babaryka's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q2axvfy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q2axvfy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/viktar_babaryka')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/violet_tarot | ce804127ab900b3ceece639323473638ff8f8c7e | 2022-02-16T19:46:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/violet_tarot | 0 | null | transformers | 35,316 | ---
language: en
thumbnail: http://www.huggingtweets.com/violet_tarot/1645040764145/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1313222795195805700/R33G2bpP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Violet Witch 🐍</div>
<div style="text-align: center; font-size: 14px;">@violet_tarot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Violet Witch 🐍.
| Data | Violet Witch 🐍 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 13 |
| Short tweets | 238 |
| Tweets kept | 2999 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3p74mq96/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @violet_tarot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/och1losv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/och1losv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/violet_tarot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/violetgweny | 1ff782b5a69c7f6ecafdefdd0f74dc40028ab5b2 | 2021-05-23T03:54:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/violetgweny | 0 | null | transformers | 35,317 | ---
language: en
thumbnail: https://www.huggingtweets.com/violetgweny/1616644773439/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1359993854871625730/TyFYsZsn_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gweny 🤖 AI Bot </div>
<div style="font-size: 15px">@violetgweny bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@violetgweny's tweets](https://twitter.com/violetgweny).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 384 |
| Short tweets | 160 |
| Tweets kept | 2697 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zc0zpn8e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
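The retweet and short-tweet counts above reflect a cleaning step that drops retweets and near-empty tweets before fine-tuning. The exact heuristics live in the huggingtweets pipeline; the sketch below only illustrates the general idea, with an assumed length threshold and a hypothetical `raw_tweets` list:

```python
import re

def keep_tweet(text: str) -> bool:
    """Drop retweets and very short tweets; strip URLs before the length check."""
    if text.startswith("RT @"):                      # retweet
        return False
    no_urls = re.sub(r"https?://\S+", "", text).strip()
    return len(no_urls) > 10                         # assumed "short tweet" threshold

# `raw_tweets` is a hypothetical list of tweet strings downloaded earlier.
raw_tweets = ["RT @someone: hello", "ok", "My dream is to write better bots"]
kept = [t for t in raw_tweets if keep_tweet(t)]
print(len(kept), "tweets kept")
```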
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @violetgweny's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1l4ggliv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1l4ggliv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/violetgweny')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/viperwave | a96695b85ad59cab8ee0e4968cdd4b2b09dd72ac | 2021-05-23T03:56:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/viperwave | 0 | null | transformers | 35,318 | ---
language: en
thumbnail: https://www.huggingtweets.com/viperwave/1617765814112/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1312590774912000000/6E6Ry8aJ_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Travis McElroy "Getting Sexual" Apology Video 🤖 AI Bot </div>
<div style="font-size: 15px">@viperwave bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@viperwave's tweets](https://twitter.com/viperwave).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3211 |
| Retweets | 980 |
| Short tweets | 623 |
| Tweets kept | 1608 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34nyqo6m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @viperwave's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/184tfonm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/184tfonm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/viperwave')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/voteblake | 5f9b1b754ce7105c8905bf41b45d64274dd43c07 | 2021-05-23T04:02:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/voteblake | 0 | null | transformers | 35,319 | ---
language: en
thumbnail: https://www.huggingtweets.com/voteblake/1617899664019/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/948045482835681280/O3hp8__2_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">blake box emoji 🤖 AI Bot </div>
<div style="font-size: 15px">@voteblake bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@voteblake's tweets](https://twitter.com/voteblake).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 208 |
| Short tweets | 379 |
| Tweets kept | 2657 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/z2oherue/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @voteblake's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19n7fmfp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19n7fmfp/artifacts) is logged and versioned.
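For readers who want a sense of what this fine-tuning step looks like in code, the sketch below runs the standard `transformers` `Trainer` on a plain-text file of tweets. It is an approximation under assumed hyperparameters and a hypothetical `tweets.txt` input file, not the exact huggingtweets training script:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "tweets.txt" is a hypothetical file with one cleaned tweet per line.
train_dataset = TextDataset(tokenizer=tokenizer, file_path="tweets.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-tweets",
                         num_train_epochs=3,              # assumed value
                         per_device_train_batch_size=8,   # assumed value
                         report_to="wandb")               # log metrics to W&B
trainer = Trainer(model=model, args=args, data_collator=data_collator,
                  train_dataset=train_dataset)
trainer.train()
```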
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/voteblake')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vtribbean | 5f9559a530b841ea2ad55c4a944dfb76ee78fc92 | 2021-05-23T04:05:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vtribbean | 0 | null | transformers | 35,320 | ---
language: en
thumbnail: https://www.huggingtweets.com/vtribbean/1620678525750/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1370572034053398529/KTwzI1eg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">💙 TRIBS 👽 VTuber 💙</div>
<div style="text-align: center; font-size: 14px;">@vtribbean</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 💙 TRIBS 👽 VTuber 💙.
| Data | 💙 TRIBS 👽 VTuber 💙 |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 578 |
| Short tweets | 531 |
| Tweets kept | 2134 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/223pztng/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vtribbean's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hme5b03) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hme5b03/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vtribbean')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/w3disd3ad | 9a445ceabfcb32706edda7603623462f42a9dc96 | 2021-05-23T04:09:00.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/w3disd3ad | 0 | null | transformers | 35,321 | ---
language: en
thumbnail: https://www.huggingtweets.com/w3disd3ad/1614105152162/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357939184275623936/hhNsWFNY_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">w3dnesday 🤖 AI Bot </div>
<div style="font-size: 15px">@w3disd3ad bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@w3disd3ad's tweets](https://twitter.com/w3disd3ad).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 644 |
| Retweets | 208 |
| Short tweets | 95 |
| Tweets kept | 341 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2c3emieq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @w3disd3ad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3jtauo7r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3jtauo7r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/w3disd3ad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/washed_u | d9ae475c6b1458c8aea290144a2eef7cd9df769f | 2021-05-23T04:13:27.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/washed_u | 0 | null | transformers | 35,322 | ---
language: en
thumbnail: https://www.huggingtweets.com/washed_u/1616771910952/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333096794826412033/ZuZpbXkU_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">honcho⛰🌵🦎 🤖 AI Bot </div>
<div style="font-size: 15px">@washed_u bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@washed_u's tweets](https://twitter.com/washed_u).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1250 |
| Retweets | 102 |
| Short tweets | 124 |
| Tweets kept | 1024 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qt0xkbfn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @washed_u's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3sm2zrqc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3sm2zrqc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/washed_u')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/weights_biases | a5fafeb53368da64c0a05b46b1a73360259abb69 | 2021-05-23T04:16:42.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/weights_biases | 0 | null | transformers | 35,323 | ---
language: en
thumbnail: https://www.huggingtweets.com/weights_biases/1607115678433/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1101567690014056448/XxZqLgwb_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Weights & Biases 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@weights_biases bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@weights_biases's tweets](https://twitter.com/weights_biases).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1206 |
| Retweets | 516 |
| Short tweets | 10 |
| Tweets kept | 680 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wvlor6a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @weights_biases's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p3knp6q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p3knp6q/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/weights_biases')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wellshit0 | 3ebfaddf82100efc395b24966d4634b7dcef9e70 | 2022-02-18T02:12:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wellshit0 | 0 | null | transformers | 35,324 | ---
language: en
thumbnail: http://www.huggingtweets.com/wellshit0/1645150325840/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1490480138752036868/de5inQAA_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">well shit</div>
<div style="text-align: center; font-size: 14px;">@wellshit0</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from well shit.
| Data | well shit |
| --- | --- |
| Tweets downloaded | 3155 |
| Retweets | 319 |
| Short tweets | 768 |
| Tweets kept | 2068 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1a1dkcwv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wellshit0's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b61bmmt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b61bmmt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wellshit0')
generator("My dream is", num_return_sequences=5)
```
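If you need more control than the pipeline offers, the same model can be loaded with its tokenizer and used through `generate()` directly. This is a sketch of equivalent lower-level usage, not part of the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/wellshit0")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/wellshit0")

inputs = tokenizer("My dream is", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs,
                             do_sample=True,
                             max_length=60,
                             num_return_sequences=5,
                             pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```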
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/whiskyhutch | fded390bacbac1bcb4e17f2ef38b637703d77641 | 2021-05-23T04:24:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/whiskyhutch | 0 | null | transformers | 35,325 | ---
language: en
thumbnail: https://www.huggingtweets.com/whiskyhutch/1617815806661/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1252749409126772738/POVl38T0_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Sterling 🐝 🤖 AI Bot </div>
<div style="font-size: 15px">@whiskyhutch bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@whiskyhutch's tweets](https://twitter.com/whiskyhutch).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2893 |
| Retweets | 1516 |
| Short tweets | 345 |
| Tweets kept | 1032 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/24q4cf2m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @whiskyhutch's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rv5i0hc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rv5i0hc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/whiskyhutch')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/whoops2gay | 049455afa750ac58bcc0927eec44c85186a3ca6b | 2021-05-23T04:25:29.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/whoops2gay | 0 | null | transformers | 35,326 | ---
language: en
thumbnail: https://www.huggingtweets.com/whoops2gay/1617793279871/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1216857777236135936/YKTnrYXf_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">whoops im gay lol 🤖 AI Bot </div>
<div style="font-size: 15px">@whoops2gay bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@whoops2gay's tweets](https://twitter.com/whoops2gay).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3206 |
| Retweets | 2434 |
| Short tweets | 190 |
| Tweets kept | 582 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jpc0yop/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @whoops2gay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3iq56t6c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3iq56t6c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/whoops2gay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wife_geist | 8695e0efa25f0811247486d857d10f4c61f0a4c7 | 2021-05-23T04:26:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wife_geist | 0 | null | transformers | 35,327 | ---
language: en
thumbnail: https://www.huggingtweets.com/wife_geist/1616642371132/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320806229413908483/qURj8zLe_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">wife Geist 🤖 AI Bot </div>
<div style="font-size: 15px">@wife_geist bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@wife_geist's tweets](https://twitter.com/wife_geist).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 242 |
| Short tweets | 301 |
| Tweets kept | 2701 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dj9mi0d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wife_geist's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/k0rfkwp0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/k0rfkwp0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wife_geist')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wiifactsplus | fd705704cb6fe3505b2018e61b757fb8031821e9 | 2021-05-23T04:27:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wiifactsplus | 0 | null | transformers | 35,328 | ---
language: en
thumbnail: https://www.huggingtweets.com/wiifactsplus/1614218679072/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1336892376963567616/-La5SswS_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Wii Facts Plus 🤖 AI Bot </div>
<div style="font-size: 15px">@wiifactsplus bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@wiifactsplus's tweets](https://twitter.com/wiifactsplus).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 153 |
| Retweets | 5 |
| Short tweets | 0 |
| Tweets kept | 148 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19wweb3v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wiifactsplus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28sh6nv9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28sh6nv9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wiifactsplus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/williamgrobman | fe483c8677a339191b61fa81eadf539825c0b439 | 2021-05-23T04:28:41.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/williamgrobman | 0 | null | transformers | 35,329 | ---
language: en
thumbnail: https://www.huggingtweets.com/williamgrobman/1616735910839/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374605323881779203/c5kjnLQp_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🔆¯\_(ツ)_/¯❓ 🤖 AI Bot </div>
<div style="font-size: 15px">@williamgrobman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@williamgrobman's tweets](https://twitter.com/williamgrobman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 51 |
| Short tweets | 223 |
| Tweets kept | 2974 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wiaiqgc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @williamgrobman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/phcsbki0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/phcsbki0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/williamgrobman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wilton_quinn | 4abfceadebbadf0a0dbe615b7970c3ecd3ebb7ee | 2022-02-20T01:44:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wilton_quinn | 0 | null | transformers | 35,330 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1485887832346357760/JhrQ8-73_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Quinn Wilton</div>
<div style="text-align: center; font-size: 14px;">@wilton_quinn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Quinn Wilton.
| Data | Quinn Wilton |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 72 |
| Short tweets | 135 |
| Tweets kept | 3038 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1789r49b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
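The linked runs can also be inspected programmatically with the W&B public API. The sketch below assumes `wandb` is installed (and that you are logged in if the project were private); it reads metadata from the data-tracking run linked above:

```python
import wandb

api = wandb.Api()
run = api.run("wandb/huggingtweets/1789r49b")  # run path from the link above
print(run.config)          # configuration logged for this run
print(run.summary)         # final logged metrics
history = run.history()    # per-step metrics as a pandas DataFrame
print(history.tail())
```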
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wilton_quinn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31qiamqk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31qiamqk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wilton_quinn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/witheredstrings | 1b7997997506e4b70441ae41d7da16ed1e2bd1fa | 2021-11-14T12:58:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/witheredstrings | 0 | null | transformers | 35,331 | ---
language: en
thumbnail: https://www.huggingtweets.com/witheredstrings/1636894685017/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1458448689568092161/qliCU5-F_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">VacuumF</div>
<div style="text-align: center; font-size: 14px;">@witheredstrings</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from VacuumF.
| Data | VacuumF |
| --- | --- |
| Tweets downloaded | 311 |
| Retweets | 10 |
| Short tweets | 15 |
| Tweets kept | 286 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jei3sh4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @witheredstrings's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wwm26hx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wwm26hx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/witheredstrings')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/witten271 | c40013a4713109052e2c087e9da6f428c15a4216 | 2021-05-23T04:32:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/witten271 | 0 | null | transformers | 35,332 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1264681056256565268/lrwZRqIv_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Edward Witten 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@witten271 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@witten271's tweets](https://twitter.com/witten271).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1337 |
| Retweets | 761 |
| Short tweets | 32 |
| Tweets kept | 544 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/i5o4s13a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @witten271's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/35w4smrg) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/35w4smrg/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/witten271')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wolfniya | cd87cc026fab5881b8ced8de1d59f76237eafc2a | 2021-05-23T04:35:39.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wolfniya | 0 | null | transformers | 35,333 | ---
language: en
thumbnail: https://www.huggingtweets.com/wolfniya/1617790877098/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357846937840586753/vte8QVom_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Evelyn fra den øya✴⬡✴🦎✴🇬🇧🇳🇴 🤖 AI Bot </div>
<div style="font-size: 15px">@wolfniya bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@wolfniya's tweets](https://twitter.com/wolfniya).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 198 |
| Short tweets | 331 |
| Tweets kept | 2714 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2e1doaxh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wolfniya's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iyc29lq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iyc29lq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wolfniya')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wormonnastring | f3c23242f97fd3b4b75bf0dc5f449b386c0da169 | 2021-08-04T18:51:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wormonnastring | 0 | null | transformers | 35,334 | ---
language: en
thumbnail: https://www.huggingtweets.com/wormonnastring/1628103109378/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1383116056726736906/9efycXzG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rhys cake</div>
<div style="text-align: center; font-size: 14px;">@wormonnastring</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rhys cake.
| Data | rhys cake |
| --- | --- |
| Tweets downloaded | 3139 |
| Retweets | 841 |
| Short tweets | 260 |
| Tweets kept | 2038 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1f327axi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wormonnastring's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gmcf7lk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gmcf7lk/artifacts) is logged and versioned.
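Because the published model is versioned, a specific revision of the weights can be pinned when loading from the Hub. A minimal sketch follows; `'main'` is simply the default branch, and a commit sha from the repository history could be used instead:

```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/wormonnastring',
                     revision='main')  # pin a branch name or commit sha here
generator("My dream is", num_return_sequences=5)
```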
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wormonnastring')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/worrski_ | 2c1b12e55f22738668a76b30d778db73776062af | 2021-05-23T04:38:29.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/worrski_ | 0 | null | transformers | 35,335 | ---
language: en
thumbnail: https://www.huggingtweets.com/worrski_/1616131395706/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1370683624324956162/R6cB17BK_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">21st Century Schizoid Ben 🤖 AI Bot </div>
<div style="font-size: 15px">@worrski_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@worrski_'s tweets](https://twitter.com/worrski_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3190 |
| Retweets | 531 |
| Short tweets | 856 |
| Tweets kept | 1803 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1g9vvlkk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @worrski_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17oq1dis) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17oq1dis/artifacts) is logged and versioned.
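The exact training script and hyperparameters live in the W&B run linked above; the snippet below is only a generic sketch of how GPT-2 can be fine-tuned on a text file of tweets with the `transformers` `Trainer`. The file name, epoch count and batch size are placeholders, not the values used for this model.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One tweet per line, packed into fixed-size blocks of tokens.
# TextDataset is a legacy helper but is enough for a quick sketch.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="tweets.txt",   # placeholder path
                            block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-tweets",
                         num_train_epochs=4,               # placeholder
                         per_device_train_batch_size=8)    # placeholder

Trainer(model=model, args=args, data_collator=collator,
        train_dataset=train_dataset).train()
```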
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/worrski_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/woxxy | 0e86b2ce62df3ef27c2f59ba7987c62698c35140 | 2022-05-20T21:22:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/woxxy | 0 | null | transformers | 35,336 | ---
language: en
thumbnail: http://www.huggingtweets.com/woxxy/1653081762754/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474540394956394498/6o13ikqu_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Woxxy</div>
<div style="text-align: center; font-size: 14px;">@woxxy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Woxxy.
| Data | Woxxy |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 308 |
| Short tweets | 374 |
| Tweets kept | 2557 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ekkjj88/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
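The counts in the table reflect a simple filtering step: retweets and very short tweets are dropped before fine-tuning. A rough sketch of that kind of preprocessing is shown below; the exact rules and thresholds used by huggingtweets may differ, and the `tweets` list here is a made-up placeholder.

```python
# Placeholder input: one dict per downloaded tweet.
tweets = [
    {"text": "RT @someone: interesting thread"},
    {"text": "ok"},
    {"text": "Working on a new model card generator today, results soon."},
]

def keep(tweet, min_words=3):
    text = tweet["text"]
    if text.startswith("RT @"):          # drop retweets
        return False
    if len(text.split()) < min_words:    # drop very short tweets
        return False
    return True

kept = [t["text"] for t in tweets if keep(t)]
print(f"kept {len(kept)} of {len(tweets)} tweets")
```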
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @woxxy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3aueqdru) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3aueqdru/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/woxxy')
generator("My dream is", num_return_sequences=5)
```
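The pipeline forwards extra keyword arguments to `generate`, so you can control length and sampling behaviour per call. The values below are arbitrary examples, not recommended settings.

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/woxxy')

# do_sample enables sampling; temperature/top_p shape it; max_new_tokens caps length.
generator("My dream is",
          num_return_sequences=3,
          do_sample=True,
          temperature=0.9,
          top_p=0.95,
          max_new_tokens=40)
```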
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wrathofgnon | b8766c39af8d548c66d040b1cd4517099a0f6963 | 2021-05-23T04:41:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wrathofgnon | 0 | null | transformers | 35,337 | ---
language: en
thumbnail: https://www.huggingtweets.com/wrathofgnon/1603923813760/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/939424471353475072/fB-3BRin_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Wrath Of Gnon 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@wrathofgnon bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@wrathofgnon's tweets](https://twitter.com/wrathofgnon).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3227 |
| Retweets | 499 |
| Short tweets | 116 |
| Tweets kept | 2612 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/x0tn1ht9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wrathofgnon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/39eutbnd) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/39eutbnd/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/wrathofgnon')
generator("My dream is", num_return_sequences=5)
```
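Sampling is random, so repeated calls give different continuations. If you need reproducible output, for example when comparing prompts, you can fix the random seed first. A minimal sketch:

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='huggingtweets/wrathofgnon')

set_seed(42)  # makes the sampled continuations repeatable across runs
generator("My dream is", num_return_sequences=5)
```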
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wyattpuppers | ee1ece580e92033b621226b76ea413173bf74d59 | 2021-05-23T04:47:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wyattpuppers | 0 | null | transformers | 35,338 | ---
language: en
thumbnail: https://www.huggingtweets.com/wyattpuppers/1609011422698/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1340068830673027074/VVV2NNgn_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">C a r p 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@wyattpuppers bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@wyattpuppers's tweets](https://twitter.com/wyattpuppers).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3190 |
| Retweets | 478 |
| Short tweets | 481 |
| Tweets kept | 2231 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bl7smzv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wyattpuppers's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ld6fkx1k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ld6fkx1k/artifacts) is logged and versioned.
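For context, recording hyperparameters and metrics in W&B only takes a few lines. The sketch below shows the general pattern rather than the actual huggingtweets training script, and every value in it is a placeholder.

```python
import wandb

# Start a run and attach the hyperparameters you want tracked.
run = wandb.init(project="huggingtweets",
                 config={"learning_rate": 5e-5, "epochs": 4})  # placeholders

for epoch in range(run.config.epochs):
    train_loss = 0.0  # placeholder for the real training loop
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```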
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/wyattpuppers')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/xaneowski | 688a5fba96709f45bf247d6933b8e067f7504093 | 2021-05-23T04:48:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/xaneowski | 0 | null | transformers | 35,339 | ---
language: en
thumbnail: https://www.huggingtweets.com/xaneowski/1616904322673/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375865466095239170/D5rWgwpw_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">xane 🤖 AI Bot </div>
<div style="font-size: 15px">@xaneowski bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@xaneowski's tweets](https://twitter.com/xaneowski).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1672 |
| Retweets | 18 |
| Short tweets | 208 |
| Tweets kept | 1446 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d0e2uns/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @xaneowski's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27c0d7fs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27c0d7fs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/xaneowski')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/xiaomi | e99063e5a32045cbfd295e3dd10e9873f351d1b7 | 2021-05-23T04:50:39.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/xiaomi | 0 | null | transformers | 35,340 | ---
language: en
thumbnail: https://www.huggingtweets.com/xiaomi/1609716260950/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1246621358458433536/JlSVXK2__400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Xiaomi 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@xiaomi bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@xiaomi's tweets](https://twitter.com/xiaomi).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 273 |
| Short tweets | 99 |
| Tweets kept | 2853 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jsfmwvr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @xiaomi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16pogxna) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16pogxna/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/xiaomi')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yarbsalocin | 0e98f2609e350957c04121abd43c1d73f1299b1b | 2021-05-23T04:55:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yarbsalocin | 0 | null | transformers | 35,341 | ---
language: en
thumbnail: https://www.huggingtweets.com/yarbsalocin/1616647825805/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/978499874923556866/8vBDa_LU_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Nicolas Bray 🤖 AI Bot </div>
<div style="font-size: 15px">@yarbsalocin bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@yarbsalocin's tweets](https://twitter.com/yarbsalocin).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 135 |
| Short tweets | 190 |
| Tweets kept | 2918 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ps5uhut/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yarbsalocin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tsu7r7s9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tsu7r7s9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yarbsalocin')
generator("My dream is", num_return_sequences=5)
```
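If you want a rough quality check of the fine-tuned model, you can measure its loss (and hence perplexity) on a handful of held-out tweets. This is only an illustrative sketch: the example texts are invented and no evaluation set is published with this model.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/yarbsalocin")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/yarbsalocin")
model.eval()

held_out = ["An invented example tweet for evaluation.",
            "Another made-up line of text."]

losses = []
for text in held_out:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # labels=input_ids gives the mean cross-entropy over the sequence
        out = model(**enc, labels=enc["input_ids"])
    losses.append(out.loss.item())

print("perplexity:", math.exp(sum(losses) / len(losses)))
```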
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ycombinator | cbdc78f8c0e57361b60db1c6b602305292c44d29 | 2021-05-23T04:56:29.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ycombinator | 0 | null | transformers | 35,342 | ---
language: en
thumbnail: https://www.huggingtweets.com/ycombinator/1603447091260/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/716641091311697920/hFFVBhFe_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Y Combinator 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ycombinator bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ycombinator's tweets](https://twitter.com/ycombinator).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 726 |
| Short tweets | 14 |
| Tweets kept | 2480 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/256at4s3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ycombinator's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/31k6k27j) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/31k6k27j/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ycombinator')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yeahyeahyens | 675491016e80313efdbdb07dcdb61aa2812d3a7d | 2021-05-23T04:58:28.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yeahyeahyens | 0 | null | transformers | 35,343 | ---
language: en
thumbnail: https://www.huggingtweets.com/yeahyeahyens/1612175424382/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/647352395622653952/WEuxh8dK_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">evilyeahyeahyens 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@yeahyeahyens bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@yeahyeahyens's tweets](https://twitter.com/yeahyeahyens).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 492 |
| Short tweets | 327 |
| Tweets kept | 2425 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3udu67c3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yeahyeahyens's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2lmppjy9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2lmppjy9/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/yeahyeahyens')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yeetgenstein | 0973ab4bb03201e2bb67458ad57b57c51e0e8447 | 2021-12-13T16:31:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yeetgenstein | 0 | null | transformers | 35,344 | ---
language: en
thumbnail: http://www.huggingtweets.com/yeetgenstein/1639413081074/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1198233166399647744/6UjqTBIT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ludwig Yeetgenstein</div>
<div style="text-align: center; font-size: 14px;">@yeetgenstein</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ludwig Yeetgenstein.
| Data | Ludwig Yeetgenstein |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 218 |
| Short tweets | 367 |
| Tweets kept | 2652 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27hq0lqc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yeetgenstein's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1aykas3j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1aykas3j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yeetgenstein')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yennyowo | 9453a2c89417b168304be11dc2c5217396dd9832 | 2021-06-07T11:56:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yennyowo | 0 | null | transformers | 35,345 | ---
language: en
thumbnail: https://www.huggingtweets.com/yennyowo/1623067005926/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1400613553070018571/4_Sit9I4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🏳🌈🌸 Pride Huwu-Yenny 🌸🏳🌈</div>
<div style="text-align: center; font-size: 14px;">@yennyowo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🏳️‍🌈🌸 Pride Huwu-Yenny 🌸🏳️‍🌈.
| Data | 🏳️‍🌈🌸 Pride Huwu-Yenny 🌸🏳️‍🌈 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 50 |
| Short tweets | 1015 |
| Tweets kept | 2185 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1e63a0zl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yennyowo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rh23jhk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rh23jhk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yennyowo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yieee_nagitaco | 34e874692b7fa68e2ded28be44b54c53c2060029 | 2021-05-23T05:00:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yieee_nagitaco | 0 | null | transformers | 35,346 | ---
language: en
thumbnail: https://www.huggingtweets.com/yieee_nagitaco/1608381956095/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1339194351298072577/tfJheAeM_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Yieee Nagi Taco (Sky) 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@yieee_nagitaco bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@yieee_nagitaco's tweets](https://twitter.com/yieee_nagitaco).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3032 |
| Retweets | 789 |
| Short tweets | 623 |
| Tweets kept | 1620 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1g8y06jc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yieee_nagitaco's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tjubyxk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tjubyxk/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/yieee_nagitaco')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yierpaen | 6a05560ccb5e762747dbc82dbf0176df91b428fe | 2021-10-29T14:00:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yierpaen | 0 | null | transformers | 35,347 | ---
language: en
thumbnail: https://www.huggingtweets.com/yierpaen/1635516027908/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1428772517347479552/fT9QUaOy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Erpan Pardon</div>
<div style="text-align: center; font-size: 14px;">@yierpaen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Erpan Pardon.
| Data | Erpan Pardon |
| --- | --- |
| Tweets downloaded | 3025 |
| Retweets | 2613 |
| Short tweets | 106 |
| Tweets kept | 306 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jk3rfqi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yierpaen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/y2mm5kxj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/y2mm5kxj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yierpaen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/youcleanitup1 | c55fccec461cbe87d421b4b662b1d22a9b0f3296 | 2021-07-31T04:10:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/youcleanitup1 | 0 | null | transformers | 35,348 | ---
language: en
thumbnail: https://www.huggingtweets.com/youcleanitup1/1627704616740/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1334724928059240448/hrHRlrJF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">YouCleanItUp</div>
<div style="text-align: center; font-size: 14px;">@youcleanitup1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from YouCleanItUp.
| Data | YouCleanItUp |
| --- | --- |
| Tweets downloaded | 3027 |
| Retweets | 131 |
| Short tweets | 111 |
| Tweets kept | 2785 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18s0ux8o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youcleanitup1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1g9i1qvf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1g9i1qvf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/youcleanitup1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yujachachacha | 857ba79bfeaca46249fa2dedcf811fd6d576b2b4 | 2021-09-14T23:35:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yujachachacha | 0 | null | transformers | 35,349 | ---
language: en
thumbnail: https://www.huggingtweets.com/yujachachacha/1631662527114/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308291810830086144/t4CsThlT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Yujacha차차</div>
<div style="text-align: center; font-size: 14px;">@yujachachacha</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Yujacha차차.
| Data | Yujacha차차 |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 2281 |
| Short tweets | 42 |
| Tweets kept | 918 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/u9lcz1jg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yujachachacha's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nhqqjka) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nhqqjka/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yujachachacha')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yukonbrandon | 444136f370c03ebcc38cd9b11352996ed9516a6f | 2021-05-23T05:06:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yukonbrandon | 0 | null | transformers | 35,350 | ---
language: en
thumbnail: https://www.huggingtweets.com/yukonbrandon/1617458018352/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1359285189482934280/-NJ6GZ4M_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Brandon Macdonald 🤖 AI Bot </div>
<div style="font-size: 15px">@yukonbrandon bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@yukonbrandon's tweets](https://twitter.com/yukonbrandon).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 105 |
| Short tweets | 128 |
| Tweets kept | 3016 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36sc6qhf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yukonbrandon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8xctkx0v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8xctkx0v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yukonbrandon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yung_caribou | ce5a698e290b70357d7ec78684cab92a444d2e9e | 2021-05-23T05:07:40.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yung_caribou | 0 | null | transformers | 35,351 | ---
language: en
thumbnail: https://www.huggingtweets.com/yung_caribou/1614136835166/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1358907764374970368/tJHY7eRK_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Certified Hairbrush Licker 🤖 AI Bot </div>
<div style="font-size: 15px">@yung_caribou bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@yung_caribou's tweets](https://twitter.com/yung_caribou).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 772 |
| Retweets | 52 |
| Short tweets | 158 |
| Tweets kept | 562 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1c7fnh7f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yung_caribou's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2palqva5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2palqva5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yung_caribou')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yungparenti | dad89fc79a4764c5383a9181cff6abd60425a334 | 2021-05-23T05:08:47.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yungparenti | 0 | null | transformers | 35,352 | ---
language: en
thumbnail: https://www.huggingtweets.com/yungparenti/1619285093850/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1305960885987532802/6TzwD8_B_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Astronaut Cum Evangelist 🇬🇷 🤖 AI Bot </div>
<div style="font-size: 15px">@yungparenti bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@yungparenti's tweets](https://twitter.com/yungparenti).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 211 |
| Short tweets | 507 |
| Tweets kept | 2514 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20pdwcql/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yungparenti's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/oi3awf8d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/oi3awf8d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yungparenti')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yybbhn | bf7dac458144650b88e176c812fc93e921146094 | 2021-05-23T05:11:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yybbhn | 0 | null | transformers | 35,353 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1369145347155591168/eRKzbkIx_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Yiyang Chen 🤖 AI Bot </div>
<div style="font-size: 15px">@yybbhn bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@yybbhn's tweets](https://twitter.com/yybbhn).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 107 |
| Retweets | 25 |
| Short tweets | 9 |
| Tweets kept | 73 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/16g99wak/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yybbhn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1pj7k13c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1pj7k13c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yybbhn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zacharyhundley | bf5f4f0849f660c2dbf9c25493a79754cbce5eab | 2021-05-23T05:12:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zacharyhundley | 0 | null | transformers | 35,354 | ---
language: en
thumbnail: https://www.huggingtweets.com/zacharyhundley/1616620657215/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1336547084372234241/ZthomfiN_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Zachary 💻🖱️ 🤖 AI Bot </div>
<div style="font-size: 15px">@zacharyhundley bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@zacharyhundley's tweets](https://twitter.com/zacharyhundley).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 260 |
| Short tweets | 797 |
| Tweets kept | 2189 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d5adsoe/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zacharyhundley's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ni6yl621) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ni6yl621/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zacharyhundley')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zachfox | 4399c19ebbcfbd58cac87d4afd23dbce2d250e45 | 2021-05-23T05:13:41.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zachfox | 0 | null | transformers | 35,355 | ---
language: en
thumbnail: https://www.huggingtweets.com/zachfox/1603321719549/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1142440824766193664/6NK0B-Gr_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Zach Fox 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@zachfox bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@zachfox's tweets](https://twitter.com/zachfox).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3168</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>547</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>415</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2206</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/21svlsaa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zachfox's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/21o6mb7e) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/21o6mb7e/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/zachfox'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/zackfox | d55e04f099e5e71ad7a07e4c129a26693b9a4988 | 2022-06-12T15:42:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zackfox | 0 | null | transformers | 35,356 | ---
language: en
thumbnail: http://www.huggingtweets.com/zackfox/1655048563971/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1512157386441695245/YdLmcC-0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Zack Fox</div>
<div style="text-align: center; font-size: 14px;">@zackfox</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Zack Fox.
| Data | Zack Fox |
| --- | --- |
| Tweets downloaded | 950 |
| Retweets | 143 |
| Short tweets | 303 |
| Tweets kept | 504 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dfumi9u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zackfox's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mpcixky) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mpcixky/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zackfox')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zavaralho | b47555be25474e90997e346bc8505fbbbf993aa9 | 2021-05-23T05:18:42.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zavaralho | 0 | null | transformers | 35,357 | ---
language: en
thumbnail: https://www.huggingtweets.com/zavaralho/1614104703560/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1347906977087827978/6EI648fF_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Zum Zavaralho 🤖 AI Bot </div>
<div style="font-size: 15px">@zavaralho bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@zavaralho's tweets](https://twitter.com/zavaralho).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3193 |
| Retweets | 241 |
| Short tweets | 1078 |
| Tweets kept | 1874 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7fl4khg3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zavaralho's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18t6cfw2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18t6cfw2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zavaralho')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zitterbewegung | ddc771a8a9182d11ca5d0b1663abed21989162dc | 2021-10-03T12:51:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zitterbewegung | 0 | null | transformers | 35,358 | ---
language: en
thumbnail: https://www.huggingtweets.com/zitterbewegung/1633265493095/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1033823587071737856/pDlHy2Sh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joshua Herman</div>
<div style="text-align: center; font-size: 14px;">@zitterbewegung</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joshua Herman.
| Data | Joshua Herman |
| --- | --- |
| Tweets downloaded | 3223 |
| Retweets | 1452 |
| Short tweets | 141 |
| Tweets kept | 1630 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rvhbbck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zitterbewegung's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qc7e3m1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qc7e3m1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zitterbewegung')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zoebot_zoe | c96e468cfe7efada4767ac40fb4a3ebc3c09f2d0 | 2021-05-23T05:25:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zoebot_zoe | 0 | null | transformers | 35,359 | ---
language: en
thumbnail: https://www.huggingtweets.com/zoebot_zoe/1607527750457/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1330687610625220611/RGVkpYG-_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ZOEOZONE 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@zoebot_zoe bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@zoebot_zoe's tweets](https://twitter.com/zoebot_zoe).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3196</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>597</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>410</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2189</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cddeevp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zoebot_zoe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kaybw72) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kaybw72/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/zoebot_zoe'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zrkrlc | 4866ab003cbaf83ed68bd6886097ee9a99e5c44c | 2021-05-23T05:26:40.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zrkrlc | 0 | null | transformers | 35,360 | ---
language: en
thumbnail: https://www.huggingtweets.com/zrkrlc/1616626091555/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1332547297914175494/RAz44L4J_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">object-level jail 🚨 🤖 AI Bot </div>
<div style="font-size: 15px">@zrkrlc bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@zrkrlc's tweets](https://twitter.com/zrkrlc).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2241 |
| Retweets | 228 |
| Short tweets | 204 |
| Tweets kept | 1809 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2g51am53/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zrkrlc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23unb7ar) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23unb7ar/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zrkrlc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zssbecker | 2456b8e0bb00624c2aa0f86f0e05daf70ae07881 | 2022-01-31T20:37:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zssbecker | 0 | null | transformers | 35,361 | ---
language: en
thumbnail: http://www.huggingtweets.com/zssbecker/1643661417699/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486464160422670337/mEvWu0_w_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alex Becker 🍊🏆🥇</div>
<div style="text-align: center; font-size: 14px;">@zssbecker</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alex Becker 🍊🏆🥇.
| Data | Alex Becker 🍊🏆🥇 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 53 |
| Short tweets | 453 |
| Tweets kept | 2743 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2j8o1cf7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zssbecker's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qe600k1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qe600k1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zssbecker')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3 | 6f413009c6ce486e1849f905ec9afd8701c7ed6a | 2022-01-15T07:25:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | husnu | null | husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3 | 0 | null | transformers | 35,362 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-cased](https://huggingface.co/dbmdz/bert-base-turkish-128k-cased) on the Turkish SQuAD dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4724
## Model description
More information needed
## Intended uses & limitations
More information needed
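A minimal usage sketch for this kind of extractive QA checkpoint (assuming the standard `question-answering` pipeline applies; the example question and context below are invented for illustration):
```python
from transformers import pipeline

# Hypothetical sketch: the question and context are made-up examples.
qa = pipeline(
    "question-answering",
    model="husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3",
)

result = qa(
    question="Türkiye'nin başkenti neresidir?",
    context="Türkiye'nin başkenti Ankara'dır ve en kalabalık şehri İstanbul'dur.",
)
print(result["answer"], result["score"])
```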
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3911 | 1.0 | 1281 | 1.4900 |
| 0.9058 | 2.0 | 2562 | 1.3471 |
| 0.6747 | 3.0 | 3843 | 1.4724 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
husnu/electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-3 | 1b818fd0e21616242a33af9bc403575702b931ca | 2022-01-15T05:24:03.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | husnu | null | husnu/electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-3 | 0 | null | transformers | 35,363 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-3
This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the Turkish SQuAD dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5305 | 1.0 | 5818 | 2.4024 |
| 2.3264 | 2.0 | 11636 | 2.2298 |
| 2.1762 | 3.0 | 17454 | 2.1669 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
husnu/electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6 | 7c564d840ab499aee6388d3b8200cc8323ecb45f | 2022-01-15T07:27:37.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:tsquad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | husnu | null | husnu/electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6 | 0 | null | transformers | 35,364 | ---
tags:
- generated_from_trainer
datasets:
- tsquad
model-index:
- name: electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6
This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the Turkish SQuAD dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.128 | 1.0 | 722 | 2.7187 |
| 3.0376 | 2.0 | 1444 | 2.4486 |
| 2.5304 | 3.0 | 2166 | 2.3485 |
| 2.4214 | 4.0 | 2888 | 2.0450 |
| 2.1568 | 5.0 | 3610 | 2.0576 |
| 2.0752 | 6.0 | 4332 | 2.0379 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6 | 521375bd663d0f02b3200deb1392b93dc43d5f90 | 2022-01-14T20:57:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | husnu | null | husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6 | 0 | null | transformers | 35,365 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3828 | 1.0 | 1845 | 1.7946 |
| 1.5827 | 2.0 | 3690 | 1.4123 |
| 1.404 | 3.0 | 5535 | 1.3142 |
| 1.346 | 4.0 | 7380 | 1.2819 |
| 1.2871 | 5.0 | 9225 | 1.2630 |
| 1.2538 | 6.0 | 11070 | 1.2578 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huyue012/wav2vec2-base-cynthia-tedlium-2500 | fd4093e8b3dbbad356dbc415e6b7cd309bed382f | 2021-11-17T22:08:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | huyue012 | null | huyue012/wav2vec2-base-cynthia-tedlium-2500 | 0 | null | transformers | 35,366 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-tedlium-2500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-tedlium-2500
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4356
- Wer: 0.1796
## Model description
More information needed
## Intended uses & limitations
More information needed
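Although the card leaves usage unspecified, transcription with this checkpoint would typically look like the sketch below (an assumption based on the standard Wav2Vec2 API; `speech.wav` is a placeholder path and the audio is expected to be mono at 16 kHz):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical sketch: load a placeholder audio file, resample it to 16 kHz,
# and decode the CTC output.
processor = Wav2Vec2Processor.from_pretrained("huyue012/wav2vec2-base-cynthia-tedlium-2500")
model = Wav2Vec2ForCTC.from_pretrained("huyue012/wav2vec2-base-cynthia-tedlium-2500")

speech_array, sampling_rate = torchaudio.load("speech.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```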
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2811 | 6.58 | 500 | 3.1455 | 0.9934 |
| 3.0533 | 13.16 | 1000 | 2.9568 | 0.9934 |
| 1.8269 | 19.73 | 1500 | 0.7595 | 0.2484 |
| 0.5103 | 26.31 | 2000 | 0.4356 | 0.1796 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
huyue012/wav2vec2-base-cynthia-tedlium | 97ac7c91445db73f025a782fc929f6ec2373f574 | 2021-11-15T05:20:50.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | huyue012 | null | huyue012/wav2vec2-base-cynthia-tedlium | 0 | null | transformers | 35,367 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-tedlium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-tedlium
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3
|
huyue012/wav2vec2-base-cynthia-timit | eb990c51468f0f6886b3370c4649e600687b9ac3 | 2021-11-15T17:29:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | huyue012 | null | huyue012/wav2vec2-base-cynthia-timit | 0 | null | transformers | 35,368 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-timit
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Wer: 0.3315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
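The hyperparameters above map roughly onto a `TrainingArguments` configuration like the sketch below (an approximate reconstruction, not the author's actual training script; the output directory name is a placeholder):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters; Adam betas/epsilon
# are the library defaults, and fp16 stands in for "Native AMP".
training_args = TrainingArguments(
    output_dir="./wav2vec2-base-cynthia-timit",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=1000,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,
)
```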
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.7674 | 1.0 | 500 | 2.8994 | 1.0 |
| 1.3538 | 2.01 | 1000 | 0.5623 | 0.5630 |
| 0.5416 | 3.01 | 1500 | 0.4595 | 0.4765 |
| 0.3563 | 4.02 | 2000 | 0.4435 | 0.4328 |
| 0.2869 | 5.02 | 2500 | 0.4035 | 0.4145 |
| 0.2536 | 6.02 | 3000 | 0.4090 | 0.3945 |
| 0.2072 | 7.03 | 3500 | 0.4188 | 0.3809 |
| 0.1825 | 8.03 | 4000 | 0.4139 | 0.3865 |
| 0.1754 | 9.04 | 4500 | 0.4320 | 0.3763 |
| 0.1477 | 10.04 | 5000 | 0.4668 | 0.3699 |
| 0.1418 | 11.04 | 5500 | 0.4439 | 0.3683 |
| 0.1207 | 12.05 | 6000 | 0.4419 | 0.3678 |
| 0.115 | 13.05 | 6500 | 0.4606 | 0.3786 |
| 0.1022 | 14.06 | 7000 | 0.4403 | 0.3610 |
| 0.1019 | 15.06 | 7500 | 0.4966 | 0.3609 |
| 0.0898 | 16.06 | 8000 | 0.4675 | 0.3586 |
| 0.0824 | 17.07 | 8500 | 0.4844 | 0.3583 |
| 0.0737 | 18.07 | 9000 | 0.4801 | 0.3534 |
| 0.076 | 19.08 | 9500 | 0.4945 | 0.3529 |
| 0.0627 | 20.08 | 10000 | 0.4700 | 0.3417 |
| 0.0723 | 21.08 | 10500 | 0.4630 | 0.3449 |
| 0.0597 | 22.09 | 11000 | 0.5164 | 0.3456 |
| 0.0566 | 23.09 | 11500 | 0.4957 | 0.3401 |
| 0.0453 | 24.1 | 12000 | 0.5032 | 0.3419 |
| 0.0492 | 25.1 | 12500 | 0.5391 | 0.3387 |
| 0.0524 | 26.1 | 13000 | 0.5057 | 0.3348 |
| 0.0381 | 27.11 | 13500 | 0.5098 | 0.3331 |
| 0.0402 | 28.11 | 14000 | 0.5087 | 0.3353 |
| 0.0358 | 29.12 | 14500 | 0.4888 | 0.3315 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
hyesunyun/NonsenseUpdateDiffStringBart | 5af99e27df92a0b5cd79897453ca41d1700e59a9 | 2022-02-08T04:10:12.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:nonsense corpus",
"transformers",
"summarization",
"diff generation",
"autotrain_compatible"
] | summarization | false | hyesunyun | null | hyesunyun/NonsenseUpdateDiffStringBart | 0 | null | transformers | 35,369 | ---
language:
- en
tags:
- summarization
- diff generation
datasets:
- nonsense corpus
metrics:
- rouge
---
Hello! This is the pretrained BART model. The dataset used for pretraining is the nonsense summary corpus, with diffs as the output.
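A minimal loading sketch (assuming the checkpoint works with the standard seq2seq auto classes; the input string below is a placeholder rather than an example from the pretraining corpus):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical sketch: generate a diff-style summary for a placeholder input.
tokenizer = AutoTokenizer.from_pretrained("hyesunyun/NonsenseUpdateDiffStringBart")
model = AutoModelForSeq2SeqLM.from_pretrained("hyesunyun/NonsenseUpdateDiffStringBart")

inputs = tokenizer("Example input text to summarize as a diff.", return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```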
|
hyunwoongko/asian-bart-en | a8366271065227d46709e347fcfd87c6a5578858 | 2021-04-01T07:47:06.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hyunwoongko | null | hyunwoongko/asian-bart-en | 0 | null | transformers | 35,370 | Entry not found |
iamholmes/cuad_test | 58fe136f2bb9c929dc7bfacd549f963f6e0e5e48 | 2021-12-15T12:17:43.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | iamholmes | null | iamholmes/cuad_test | 0 | null | transformers | 35,371 | Entry not found |
iarfmoose/wav2vec2-large-xlsr-frisian | 5255325913f56a9f0e451e879d55bb4a40cde9e7 | 2021-07-06T05:51:20.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fy-NL",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | iarfmoose | null | iarfmoose/wav2vec2-large-xlsr-frisian | 0 | null | transformers | 35,372 | ---
language: fy-NL
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Frisian by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 21.72
---
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Read the audio files as arrays and resample them to 16 kHz.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\–\\—\\¬\\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.72 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/XLSR_Frisian.ipynb)
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/wav2vec2_fyNL_eval.ipynb) |
ielab/TILDEv2-noExp | ad7d90c825533412f8e665226f52218157541585 | 2022-02-07T01:17:56.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | ielab | null | ielab/TILDEv2-noExp | 0 | null | transformers | 35,373 | Entry not found |
ietz/bert-base-uncased-finetuned-jira-qt-issue-titles-and-bodies | 2c340450dedfaeee6bdec6e82e78f441b54c0d49 | 2022-02-04T02:36:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ietz | null | ietz/bert-base-uncased-finetuned-jira-qt-issue-titles-and-bodies | 0 | null | transformers | 35,374 | Entry not found |
ietz/distilroberta-base-finetuned-jira-qt-issue-title | ee0964b384d6195ee9cf6dad66e2cc6c443aec55 | 2022-01-07T12:27:11.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"jira",
"code",
"issue",
"development",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | ietz | null | ietz/distilroberta-base-finetuned-jira-qt-issue-title | 0 | null | transformers | 35,375 | ---
language:
- en
tags:
- jira
- code
- issue
- development
license: mit
---
`distilroberta-base` fine-tuned for masked language modeling on 126,213 Qt Jira issue titles for up to 50 epochs.
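A minimal usage sketch (assuming the standard `fill-mask` pipeline applies to this checkpoint; the issue-title prompt below is invented for illustration):
```python
from transformers import pipeline

# Hypothetical sketch: predict the masked token in a made-up Qt issue title.
fill_mask = pipeline(
    "fill-mask",
    model="ietz/distilroberta-base-finetuned-jira-qt-issue-title",
)

for prediction in fill_mask("QML <mask> crashes when the view is resized"):
    print(prediction["token_str"], round(prediction["score"], 3))
```
 |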
ifis-zork/ZORK_AI_FANTASY | 18c1a077a0e855ec3f6dac4a0f9984c80434f185 | 2021-07-21T09:50:17.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | false | ifis-zork | null | ifis-zork/ZORK_AI_FANTASY | 0 | null | transformers | 35,376 | ---
tags:
- generated_from_trainer
model_index:
- name: ZORK_AI_FANTASY
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_FANTASY
This model is a fine-tuned version of [ifis-zork/ZORK_AI_FAN_TEMP](https://huggingface.co/ifis-zork/ZORK_AI_FAN_TEMP) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
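A minimal generation sketch (assuming the standard `text-generation` pipeline applies to this checkpoint; the prompt below is only an example):
```python
from transformers import pipeline

# Hypothetical sketch: generate a short fantasy-style continuation.
generator = pipeline("text-generation", model="ifis-zork/ZORK_AI_FANTASY")

output = generator(
    "You are standing in an open field west of a white house.",
    max_length=60,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```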
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
ifis-zork/ZORK_AI_FAN_TEMP | cd1c162876276ce65826b64b35a89d08002803d3 | 2021-07-20T19:58:38.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | false | ifis-zork | null | ifis-zork/ZORK_AI_FAN_TEMP | 0 | null | transformers | 35,377 | ---
tags:
- generated_from_trainer
model_index:
- name: ZORK_AI_FAN_TEMP
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_FAN_TEMP
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
ifis-zork/ZORK_AI_MODERN | 998d5def10991218bd7a2d2f755f2ba17868164b | 2021-07-20T20:47:22.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | false | ifis-zork | null | ifis-zork/ZORK_AI_MODERN | 0 | null | transformers | 35,378 | ---
tags:
- generated_from_trainer
model_index:
- name: ZORK_AI_MODERN
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_MODERN
This model is a fine-tuned version of [ifis-zork/ZORK_AI_MODERN_A](https://huggingface.co/ifis-zork/ZORK_AI_MODERN_A) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
ifis-zork/ZORK_AI_SCI_FI | bfbf0c6e6fa4cfab4a39d5cd62f500c483a9c7ca | 2021-07-21T14:18:15.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | false | ifis-zork | null | ifis-zork/ZORK_AI_SCI_FI | 0 | null | transformers | 35,379 | ---
tags:
- generated_from_trainer
model_index:
- name: ZORK_AI_SCI_FI
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_SCI_FI
This model is a fine-tuned version of [ifis-zork/ZORK_AI_SCI_FI_TEMP](https://huggingface.co/ifis-zork/ZORK_AI_SCI_FI_TEMP) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
ilevs/opus-mt-en-ru-finetuned-en-to-ru | 70ebd08e3f6eb9bc8d86b5f91d45e9acd6d61a09 | 2022-01-20T18:18:30.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ilevs | null | ilevs/opus-mt-en-ru-finetuned-en-to-ru | 0 | null | transformers | 35,380 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned-en-to-ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned-en-to-ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7682
- Bleu: 14.6112
- Gen Len: 7.202
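A minimal usage sketch for this checkpoint; the English input sentence is illustrative:

```python
from transformers import pipeline

# English → Russian translation with this fine-tuned Marian checkpoint.
translator = pipeline("translation", model="ilevs/opus-mt-en-ru-finetuned-en-to-ru")

print(translator("How are you doing today?")[0]["translation_text"])
```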
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.3198 | 1.0 | 4956 | 2.1261 | 9.5339 | 6.7709 |
| 1.9732 | 2.0 | 9912 | 1.9639 | 10.4715 | 7.1254 |
| 1.7127 | 3.0 | 14868 | 1.8780 | 11.6128 | 7.1106 |
| 1.5614 | 4.0 | 19824 | 1.8367 | 12.8389 | 7.0468 |
| 1.4276 | 5.0 | 24780 | 1.8040 | 13.7423 | 7.0403 |
| 1.3096 | 6.0 | 29736 | 1.7820 | 14.1469 | 7.0555 |
| 1.2381 | 7.0 | 34692 | 1.7761 | 13.9987 | 7.2225 |
| 1.1784 | 8.0 | 39648 | 1.7725 | 14.4675 | 7.1799 |
| 1.1376 | 9.0 | 44604 | 1.7692 | 14.4937 | 7.1957 |
| 1.0862 | 10.0 | 49560 | 1.7682 | 14.6112 | 7.202 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ilikeapple12/DialoGPT-small-Phos | 74be19b987f43eb465b590f6ffcf516da206dbb8 | 2022-01-08T23:48:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ilikeapple12 | null | ilikeapple12/DialoGPT-small-Phos | 0 | null | transformers | 35,381 | ---
tags:
- conversational
---
# DialoGPT Model
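A minimal single-turn chat sketch for this checkpoint; the user prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ilikeapple12/DialoGPT-small-Phos")
model = AutoModelForCausalLM.from_pretrained("ilikeapple12/DialoGPT-small-Phos")

# Encode the user message followed by the EOS token, then generate a reply.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```
 |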
illvmi/mymodel04 | b71a987fa85343f8ca20ddb333b86fd90d361d1e | 2022-02-10T12:41:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | illvmi | null | illvmi/mymodel04 | 0 | null | transformers | 35,382 | 我要年薪百万,哈哈
|
indridinn/IceBERT-finetuned-ner | 5451c78cf12fcebc51f5d3a2086661fade8d873e | 2021-10-02T02:18:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | indridinn | null | indridinn/IceBERT-finetuned-ner | 0 | null | transformers | 35,383 | ---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8918518518518519
- name: Recall
type: recall
value: 0.8631855657784682
- name: F1
type: f1
value: 0.8772845953002611
- name: Accuracy
type: accuracy
value: 0.9851436434474428
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Precision: 0.8919
- Recall: 0.8632
- F1: 0.8773
- Accuracy: 0.9851
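A short usage sketch with the token-classification pipeline; the Icelandic example sentence is illustrative:

```python
from transformers import pipeline

# Icelandic NER with this fine-tuned IceBERT checkpoint; entities are grouped into spans.
ner = pipeline(
    "token-classification",
    model="indridinn/IceBERT-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Katrín Jakobsdóttir heimsótti Reykjavík í gær."))
```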
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0533 | 1.0 | 2904 | 0.0777 | 0.8773 | 0.8527 | 0.8648 | 0.9834 |
| 0.0271 | 2.0 | 5808 | 0.0794 | 0.8740 | 0.8537 | 0.8638 | 0.9835 |
| 0.0165 | 3.0 | 8712 | 0.0830 | 0.8919 | 0.8632 | 0.8773 | 0.9851 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
infinex/translation_model | 4566d3fd56a02e00fa1b09f6be7e6887656fdc77 | 2021-05-04T09:06:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | infinex | null | infinex/translation_model | 0 | null | transformers | 35,384 | Entry not found |
infinitejoy/Wav2Vec2-Large-XLSR-53-Odia | 7a56c2328eaf8f30a2c16aea2b452024a79e498c | 2022-01-21T13:19:09.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/Wav2Vec2-Large-XLSR-53-Odia | 0 | null | transformers | 35,385 | ---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Odia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice or
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 55.07
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।\–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 55.07 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil | 76fc3a96cfd11aa319fed7cff08e78f48f7d39f0 | 2021-07-06T06:33:19.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil | 0 | null | transformers | 35,386 | ---
language: ta
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Tamil
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 71.29
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 71.29 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 | c142f6ba9fa1230c13ea5e8ae0ef93d735da8d02 | 2022-03-24T11:58:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 | 0 | null | transformers | 35,387 | ---
language:
- br
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- br
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Breton
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: br
metrics:
- name: Test WER
type: wer
value: 54.855
- name: Test CER
type: cer
value: 17.865
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Breton
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: NA
- Wer: NA
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
### Training results
NA
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset mozilla-foundation/common_voice_8_0 --config br --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset speech-recognition-community-v2/dev_data --config br --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "br", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Eval results on Common Voice 7 "test" (WER):
NA
|
infinitejoy/wav2vec2-large-xls-r-300m-maltese | c6b2503d0e3f0bdadc11317d85ec15cba737e107 | 2022-03-24T11:53:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-maltese | 0 | null | transformers | 35,388 | ---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- mt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Maltese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: mt
metrics:
- name: Test WER
type: wer
value: 23.503
- name: Test CER
type: cer
value: 5.065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-maltese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2005
- Wer: 0.1897
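A minimal transcription sketch; the audio path is a placeholder and the input should be 16 kHz speech:

```python
from transformers import pipeline

# Maltese speech recognition with this fine-tuned XLS-R checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-maltese",
)

# Placeholder path — point this at a local 16 kHz Maltese audio file.
print(asr("path/to/maltese_sample.wav")["text"])
```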
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2238 | 18.02 | 2000 | 0.3911 | 0.4310 |
| 0.7871 | 36.04 | 4000 | 0.2063 | 0.2309 |
| 0.6653 | 54.05 | 6000 | 0.1960 | 0.2091 |
| 0.5861 | 72.07 | 8000 | 0.1986 | 0.2000 |
| 0.5283 | 90.09 | 10000 | 0.1993 | 0.1909 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-odia-cv8 | a7074fdd4b97d20759e6b99281650e34c3a194c5 | 2022-02-05T18:24:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-odia-cv8 | 0 | null | transformers | 35,389 | ---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-odia-cv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-odia-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8176
- Wer: 0.5818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3957 | 20.83 | 500 | 1.0925 | 0.8111 |
| 1.0351 | 41.67 | 1000 | 0.7837 | 0.6574 |
| 0.7396 | 62.5 | 1500 | 0.7674 | 0.6083 |
| 0.5385 | 83.33 | 2000 | 0.8015 | 0.5812 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-romansh-vallader | 26c7dcfb01f5f737806e08597ffe64c813fb08ea | 2022-03-24T11:58:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"rm-vallader",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-romansh-vallader | 0 | null | transformers | 35,390 | ---
language:
- rm-vallader
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- rm-vallader
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Romansh Vallader
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: rm-vallader
metrics:
- name: Test WER
type: wer
value: 31.689
- name: Test CER
type: cer
value: 7.202
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-romansh-vallader
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RM-VALLADER dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3155
- Wer: 0.3162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9556 | 15.62 | 500 | 2.9300 | 1.0 |
| 1.7874 | 31.25 | 1000 | 0.7566 | 0.6509 |
| 1.0131 | 46.88 | 1500 | 0.3671 | 0.3828 |
| 0.8439 | 62.5 | 2000 | 0.3350 | 0.3416 |
| 0.7502 | 78.12 | 2500 | 0.3155 | 0.3296 |
| 0.7093 | 93.75 | 3000 | 0.3182 | 0.3186 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-turkish-colab | 72651076702ffb7a813916989f720d0ee436e5d3 | 2022-01-18T03:49:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-turkish-colab | 0 | null | transformers | 35,391 | Entry not found |
infominer/bertweet-ner | 70ac06b773e104c689b3d08986b187b4f684d970 | 2021-05-29T11:14:44.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | infominer | null | infominer/bertweet-ner | 0 | null | transformers | 35,392 | Entry not found |
ingridnc/t5-small-finetuned-fi-to-en | bc78fc47263f454e0b33e38c66716649aada5ebe | 2021-12-06T01:11:33.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt19",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ingridnc | null | ingridnc/t5-small-finetuned-fi-to-en | 0 | null | transformers | 35,393 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt19
metrics:
- bleu
model-index:
- name: t5-small-finetuned-fi-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt19
type: wmt19
args: fi-en
metrics:
- name: Bleu
type: bleu
value: 1.618
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-fi-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3598
- Bleu: 1.618
- Gen Len: 17.3223
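A minimal generation sketch; the "translate Finnish to English: " prefix mirrors the usual T5 translation setup, but whether this exact prefix was used during fine-tuning is an assumption, and the Finnish input is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ingridnc/t5-small-finetuned-fi-to-en")
model = AutoModelForSeq2SeqLM.from_pretrained("ingridnc/t5-small-finetuned-fi-to-en")

# Task prefix assumed from the standard T5 translation recipe.
inputs = tokenizer("translate Finnish to English: Hyvää huomenta!", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```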
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.3627 | 1.0 | 6250 | 3.5122 | 1.2882 | 17.1803 |
| 3.2162 | 2.0 | 12500 | 3.4442 | 1.4329 | 17.2617 |
| 3.1304 | 3.0 | 18750 | 3.3872 | 1.4862 | 17.296 |
| 3.0832 | 4.0 | 25000 | 3.3648 | 1.5795 | 17.3047 |
| 3.0623 | 5.0 | 31250 | 3.3598 | 1.618 | 17.3223 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ironman123/DialoGPT-small-harrypotter | 66f4e64260d6cb1a1342747753689c1d6780020e | 2021-08-28T16:21:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ironman123 | null | ironman123/DialoGPT-small-harrypotter | 0 | null | transformers | 35,394 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
it5/it5-small-question-answering | 6acbb88d78c77ebeceb0fbfc0453ddcff1814296 | 2022-03-09T07:58:17.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:squad_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"squad_it",
"text2text-question-answering",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/it5-small-question-answering | 0 | 1 | transformers | 35,395 | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- italian
- sequence-to-sequence
- squad_it
- text2text-question-answering
- text2text-generation
widget:
- text: "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"
- text: "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"
- text: "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"
- text: "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?"
metrics:
- f1
- exact-match
model-index:
- name: it5-small-question-answering
results:
- task:
type: question-answering
name: "Question Answering"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: f1
value: 0.716
name: "Test F1"
- type: exact-match
value: 0.619
name: "Test Exact Match"
co2_eq_emissions:
emissions: "8g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Small for Question Answering ⁉️ 🇮🇹
This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model='it5/it5-small-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-question-answering")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
it5/mt5-base-repubblica-to-ilgiornale | d397c2c10f804fdceea2e5f4ca20945418bbd2b4 | 2022-03-09T08:00:57.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"newspaper",
"ilgiornale",
"repubblica",
"style-transfer",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/mt5-base-repubblica-to-ilgiornale | 0 | null | transformers | 35,396 | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: mt5-base-repubblica-to-ilgiornale
results:
- task:
type: headline-style-transfer-repubblica-to-ilgiornale
name: "Headline style transfer (Repubblica to Il Giornale)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.256
name: "Test Rouge1"
- type: rouge2
value: 0.080
name: "Test Rouge2"
- type: rougeL
value: 0.223
name: "Test RougeL"
- type: bertscore
value: 0.384
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: headline-headline-consistency-classifier
value: 0.775
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.825
name: "Test Headline-Article Consistency Accuracy"
co2_eq_emissions:
emissions: "40g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# mT5 Base for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹
This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
r2g = pipeline("text2text-generation", model='it5/mt5-base-repubblica-to-ilgiornale')
r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-repubblica-to-ilgiornale")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-repubblica-to-ilgiornale")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
izzy-lazerson/wav2vec2-large-xls-r-300m-turkish-colab | 292c2f9d0c197c89b3e28742d7465d96eec648fa | 2022-02-23T19:31:58.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | izzy-lazerson | null | izzy-lazerson/wav2vec2-large-xls-r-300m-turkish-colab | 0 | null | transformers | 35,397 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3866
- Wer: 0.3363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9949 | 3.67 | 400 | 0.7055 | 0.6984 |
| 0.4192 | 7.34 | 800 | 0.4530 | 0.4711 |
| 0.1987 | 11.01 | 1200 | 0.4319 | 0.4384 |
| 0.1317 | 14.68 | 1600 | 0.4332 | 0.4179 |
| 0.0988 | 18.35 | 2000 | 0.4201 | 0.3755 |
| 0.0791 | 22.02 | 2400 | 0.3968 | 0.3723 |
| 0.0628 | 25.69 | 2800 | 0.3998 | 0.3477 |
| 0.0501 | 29.36 | 3200 | 0.3866 | 0.3363 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jack-oh/korbert_morp_korquad | 352ea4cac9a6be91933244b24ed6cb216944ba9d | 2021-05-25T08:06:32.000Z | [
"pytorch",
"transformers"
] | null | false | jack-oh | null | jack-oh/korbert_morp_korquad | 0 | null | transformers | 35,398 | Entry not found |
jadegao/askinvesto-distilbert-model | 9dbc1067e256f2bab87f0087d8eda5739886384f | 2021-10-31T06:25:27.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | jadegao | null | jadegao/askinvesto-distilbert-model | 0 | null | transformers | 35,399 | Entry not found |