Dataset schema (one row per model card):

| Column | Type / shape |
| --- | --- |
| sha | null |
| last_modified | null |
| library_name | string (154 classes) |
| text | string, length 1 to 900k |
| metadata | string, length 2 to 348k |
| pipeline_tag | string (45 classes) |
| id | string, length 5 to 122 |
| tags | list, length 1 to 1.84k |
| created_at | string, length 25 |
| arxiv | list, length 0 to 201 |
| languages | list, length 0 to 1.83k |
| tags_str | string, length 17 to 9.34k |
| text_str | string, length 0 to 389k |
| text_lists | list, length 0 to 722 |
| processed_texts | list, length 1 to 723 |
| tokens_length | list, length 1 to 723 |
| input_texts | list, length 1 to 61 |
| embeddings | list, length 768 |
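Given this schema, a record can be inspected with the `datasets` library before digging into the raw rows below. A minimal sketch, assuming the dump is published on the Hugging Face Hub; `user/model-cards-dump` is a hypothetical dataset id standing in for wherever this dump actually lives:

```python
# Minimal inspection sketch; "user/model-cards-dump" is a hypothetical id.
from datasets import load_dataset

ds = load_dataset("user/model-cards-dump", split="train")
print(ds.column_names)         # sha, last_modified, library_name, text, ...

row = ds[0]
print(row["id"])               # model id, e.g. "huggingtweets/furrymicky"
print(row["pipeline_tag"])     # e.g. "text-generation"
print(len(row["embeddings"]))  # always 768 per the schema
```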
null | null | transformers |
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1339300525007835137/YpAMPovA_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Micky The Weirdo from Taranto 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@furrymicky bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@furrymicky's tweets](https://twitter.com/furrymicky).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>459</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>91</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>354</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/109l35nl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
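The counts above come from a simple filtering pass over the downloaded tweets. A sketch of the implied logic, not the exact huggingtweets preprocessing; the 10-character threshold for "short" is an assumption:

```python
# Sketch of the filtering implied by the table above: drop retweets and
# very short tweets, keep the rest. The length threshold is assumed.
def keep(text: str) -> bool:
    is_retweet = text.startswith("RT @")
    is_short = len(text.strip()) < 10
    return not is_retweet and not is_short

tweets = ["RT @someone: hello", "ok", "Taranto is lovely this time of year"]
kept = [t for t in tweets if keep(t)]
print(kept)  # only the last tweet survives
```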
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @furrymicky's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/q3uw2fui) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/q3uw2fui/artifacts) is logged and versioned.
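In outline, the fine-tuning is a standard causal-language-modeling run. Below is a minimal sketch with the `transformers` Trainer, not the exact huggingtweets training script; the hyperparameters and the two example tweets are illustrative placeholders (the real values live in the W&B run linked above):

```python
# Minimal causal-LM fine-tuning sketch; hyperparameters are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in training data: tokenized tweets (real runs use the kept tweets).
texts = ["just setting up my twttr", "another example tweet"]
train_dataset = Dataset.from_dict(tokenizer(texts, truncation=True, max_length=64))

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
args = TrainingArguments(output_dir="out", num_train_epochs=4,
                         per_device_train_batch_size=8)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_dataset, data_collator=collator)
trainer.train()
```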
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/furrymicky')
generator("My dream is", num_return_sequences=5)
```
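The call above uses the model's default generation settings; the usual sampling knobs can be passed through to `generate` if the outputs feel repetitive. These particular values are illustrative, not a recommendation from the card:

```python
# Illustrative sampling settings, not values prescribed by this card.
generator("My dream is", num_return_sequences=5,
          do_sample=True, temperature=0.9, top_p=0.95, max_length=60)
```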
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/furrymicky
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Micky The Weirdo from Taranto AI Bot </div>
<div style="font-size: 15px; color: #657786">@furrymicky bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @furrymicky's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>459</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>91</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>354</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @furrymicky's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/furrymicky'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n">
<div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ansq@漫画読みすぎ 🤖 AI Bot</div>
<div style="font-size: 15px">@fuurawa bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@fuurawa's tweets](https://twitter.com/fuurawa).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1867 |
| Retweets | 1276 |
| Short tweets | 102 |
| Tweets kept | 489 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2q0sdp5o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fuurawa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24t10y8h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24t10y8h/artifacts) is logged and versioned.
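Since the final model is logged as a W&B artifact, it can be pulled back down with the W&B public API. A sketch; the artifact path below is an assumed placeholder, the real name is shown on the linked run page:

```python
# Sketch of retrieving the logged model artifact; the artifact path is an
# assumed placeholder, not the run's actual artifact name.
import wandb

api = wandb.Api()
artifact = api.artifact("wandb/huggingtweets/model-24t10y8h:latest")
local_dir = artifact.download()  # returns the local directory path
print(local_dir)
```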
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fuurawa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/fuurawa/1616936220610/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/fuurawa
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gabriel Boric Font</div>
<div style="text-align: center; font-size: 14px;">@gabrielboric</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gabriel Boric Font.
| Data | Gabriel Boric Font |
| --- | --- |
| Tweets downloaded | 3166 |
| Retweets | 1575 |
| Short tweets | 261 |
| Tweets kept | 1330 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/sgtq44wg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gabrielboric's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wl4b6qky) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wl4b6qky/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gabrielboric')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gabrielboric/1628117067958/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gabrielboric
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1312899140615979008/ulnJKPCT_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ZOZANZI ♤☆♤ VIRAGO 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@gadgetgreen bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gadgetgreen's tweets](https://twitter.com/gadgetgreen).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3189</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1537</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>215</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1437</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1f29q7ag/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gadgetgreen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1df6ql9u) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1df6ql9u/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/gadgetgreen')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gadgetgreen/1602201219260/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gadgetgreen
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ZOZANZI VIRAGO AI Bot </div>
<div style="font-size: 15px; color: #657786">@gadgetgreen bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @gadgetgreen's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3189</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1537</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>215</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1437</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @gadgetgreen's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gadgetgreen'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n">
<div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">GⒶge H Leibman 🤖 AI Bot</div>
<div style="font-size: 15px">@gagehleibman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gagehleibman's tweets](https://twitter.com/gagehleibman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3117 |
| Retweets | 600 |
| Short tweets | 486 |
| Tweets kept | 2031 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vjxnqnf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gagehleibman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/67jfcjhk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/67jfcjhk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gagehleibman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gagehleibman/1616696622775/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gagehleibman
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1306714515094921217/cH_rXwuk_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gail Simone RED HEADED WOMAN NOT BEAR 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@gailsimone bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gailsimone's tweets](https://twitter.com/gailsimone).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3205</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1400</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>322</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1483</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1u34kgh5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gailsimone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3krfygi5) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3krfygi5/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/gailsimone')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gailsimone/1601276450894/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gailsimone
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gail Simone RED HEADED WOMAN NOT BEAR AI Bot </div>
<div style="font-size: 15px; color: #657786">@gailsimone bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @gailsimone's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3205</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1400</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>322</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1483</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @gailsimone's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gailsimone'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276957687507496962/zy4w13io_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gal Shapira 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@galjudo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@galjudo's tweets](https://twitter.com/galjudo).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3211</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>420</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>653</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2138</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1iczn33x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @galjudo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/14zzhtt9) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/14zzhtt9/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/galjudo')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/galjudo/1602233220657/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/galjudo
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gal Shapira AI Bot </div>
<div style="font-size: 15px; color: #657786">@galjudo bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @galjudo's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3211</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>420</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>653</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2138</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @galjudo's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/galjudo'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n\ngenerator(<span style=\"color:#FF9800\">\"My dream is\"</span>, num_return_sequences=<span style=\"color:#8BC34A\">5</span>)</code></pre>",
"### Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.",
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">gãmbs</div>
<div style="text-align: center; font-size: 14px;">@gambsvns</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from gãmbs.
| Data | gãmbs |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 86 |
| Short tweets | 308 |
| Tweets kept | 2852 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wahjzcj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gambsvns's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1td3tcaf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1td3tcaf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gambsvns')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gambsvns/1626385842515/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gambsvns
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Repulse | Iragon is on Kickstarter!</div>
<div style="text-align: center; font-size: 14px;">@gamerepulse</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Repulse | Iragon is on Kickstarter!.
| Data | Repulse \| Iragon is on Kickstarter! |
| --- | --- |
| Tweets downloaded | 510 |
| Retweets | 166 |
| Short tweets | 23 |
| Tweets kept | 321 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dqejmdb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gamerepulse's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/czq1aton) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/czq1aton/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gamerepulse')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gamerepulse/1637857655050/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gamerepulse
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> AI BOT </div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Repulse | Iragon is on Kickstarter!</div>
<div style="text-align: center; font-size: 14px;">@gamerepulse</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on tweets from Repulse | Iragon is on Kickstarter!.
| Data | Repulse | Iragon is on Kickstarter! |
| --- | --- |
| Tweets downloaded | 510 |
| Retweets | 166 |
| Short tweets | 23 |
| Tweets kept | 321 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @gamerepulse's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
## Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gandalf the White (Thulêan Perspective)</div>
<div style="text-align: center; font-size: 14px;">@gandalfthewhi19</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gandalf the White (Thulêan Perspective).
| Data | Gandalf the White (Thulêan Perspective) |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 431 |
| Short tweets | 225 |
| Tweets kept | 2588 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1r47j719/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gandalfthewhi19's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/u6nhe6ef) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/u6nhe6ef/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gandalfthewhi19')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/gandalfthewhi19/1645099160912/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gandalfthewhi19
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Gandalf the White (Thulêan Perspective)
@gandalfthewhi19
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Gandalf the White (Thulêan Perspective).
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gandalfthewhi19's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
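The fenced code example was stripped from this processed copy; a minimal sketch, mirroring the snippet in the markdown card above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
generator = pipeline('text-generation', model='huggingtweets/gandalfthewhi19')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```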
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gary Short</div>
<div style="text-align: center; font-size: 14px;">@garyshort</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gary Short.
| Data | Gary Short |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 94 |
| Short tweets | 321 |
| Tweets kept | 2833 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2vtmlhlj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @garyshort's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pfbf1ys) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pfbf1ys/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/garyshort')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/garyshort/1647971079915/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/garyshort
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Gary Short
@garyshort
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Gary Short.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @garyshort's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
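The code block did not survive processing. A minimal sketch of the same step using the Auto classes rather than `pipeline` (the Auto-class route is an assumption, not from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo ships standard GPT-2 weights and tokenizer files
tokenizer = AutoTokenizer.from_pretrained('huggingtweets/garyshort')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/garyshort')

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```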
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gaston Gordillo 🤖 AI Bot </div>
<div style="font-size: 15px">@gaston_gordillo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gaston_gordillo's tweets](https://twitter.com/gaston_gordillo).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 705 |
| Retweets | 524 |
| Short tweets | 5 |
| Tweets kept | 176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kme4rls/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaston_gordillo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6zu3yfw0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6zu3yfw0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gaston_gordillo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gaston_gordillo/1617249460228/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gaston_gordillo
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gaston Gordillo AI Bot
@gaston\_gordillo bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gaston\_gordillo's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gaston\_gordillo's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
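The code block was dropped here; a minimal sketch as in the markdown card above, where the sampling settings are illustrative assumptions rather than values from the card:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/gaston_gordillo')

# Sampling settings below are illustrative, not from the original card
generator("My dream is", num_return_sequences=5,
          do_sample=True, temperature=0.9, top_p=0.95)
```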
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">gatcha 🤖 AI Bot </div>
<div style="font-size: 15px">@gatchabot bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gatchabot's tweets](https://twitter.com/gatchabot).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2200 |
| Retweets | 1728 |
| Short tweets | 121 |
| Tweets kept | 351 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qhi9616/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gatchabot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o3eonr9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o3eonr9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gatchabot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gatchabot
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gatcha AI Bot
@gatchabot bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gatchabot's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gatchabot's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
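The snippet was stripped during processing; a minimal sketch mirroring the markdown card above, with an added `set_seed` call (an assumption, for reproducible sampling):

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='huggingtweets/gatchabot')

set_seed(42)  # illustrative seed, not part of the original card
generator("My dream is", num_return_sequences=5)
```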
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ᕙ(‾̀◡‾́)ᕗ g</div>
<div style="text-align: center; font-size: 14px;">@gaucheian</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ᕙ(‾̀◡‾́)ᕗ g.
| Data | ᕙ(‾̀◡‾́)ᕗ g |
| --- | --- |
| Tweets downloaded | 2213 |
| Retweets | 92 |
| Short tweets | 279 |
| Tweets kept | 1842 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/x1bx2fez/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaucheian's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/0i3i22al) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/0i3i22al/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gaucheian')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gaucheian
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
ᕙ(‾̀◡‾́)ᕗ g
@gaucheian
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from ᕙ(‾̀◡‾́)ᕗ g.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gaucheian's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
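As elsewhere in this dump, the code example was stripped; a minimal sketch restoring the snippet from the markdown card above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
generator = pipeline('text-generation', model='huggingtweets/gaucheian')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```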
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gavi Begtrup</div>
<div style="text-align: center; font-size: 14px;">@gavibegtrup</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gavi Begtrup.
| Data | Gavi Begtrup |
| --- | --- |
| Tweets downloaded | 990 |
| Retweets | 67 |
| Short tweets | 49 |
| Tweets kept | 874 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kx48u2r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gavibegtrup's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1n9nuiku) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1n9nuiku/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gavibegtrup')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gavibegtrup/1622127344791/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gavibegtrup
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Gavi Begtrup
@gavibegtrup
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Gavi Begtrup.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gavibegtrup's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
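A minimal sketch of the stripped step, here via the Auto classes instead of `pipeline` (an assumed but standard route for GPT-2 checkpoints):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard GPT-2 weights and tokenizer files are present
tokenizer = AutoTokenizer.from_pretrained('huggingtweets/gavibegtrup')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/gavibegtrup')

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```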
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">sam 🤖 AI Bot </div>
<div style="font-size: 15px">@gayandonline bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gayandonline's tweets](https://twitter.com/gayandonline).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3002 |
| Retweets | 290 |
| Short tweets | 293 |
| Tweets kept | 2419 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3963etnb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gayandonline's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/146uc4xj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/146uc4xj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gayandonline')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gayandonline/1617808083660/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gayandonline
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
sam AI Bot
@gayandonline bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gayandonline's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gayandonline's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
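The processed copy lost the code block; a minimal sketch, with illustrative (assumed) sampling settings added to the card's original call:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/gayandonline')

# Sampling settings below are illustrative, not from the original card
generator("My dream is", num_return_sequences=5,
          do_sample=True, temperature=0.9, top_p=0.95)
```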
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">alphonse⛓️🦇🌹 🤖 AI Bot </div>
<div style="font-size: 15px">@gaybats1999 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gaybats1999's tweets](https://twitter.com/gaybats1999).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2783 |
| Retweets | 999 |
| Short tweets | 225 |
| Tweets kept | 1559 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39y8clnw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaybats1999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mzsqlq3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mzsqlq3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gaybats1999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gaybats1999/1614135497450/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gaybats1999
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
alphonse AI Bot
@gaybats1999 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gaybats1999's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gaybats1999's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
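A minimal sketch of the stripped snippet, with a `set_seed` call added as an assumption for reproducible sampling:

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='huggingtweets/gaybats1999')

set_seed(42)  # illustrative seed, not part of the original card
generator("My dream is", num_return_sequences=5)
```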
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">lisa 🏳️⚧️ 🤖 AI Bot </div>
<div style="font-size: 15px">@gaydeerinc bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gaydeerinc's tweets](https://twitter.com/gaydeerinc).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3214 |
| Retweets | 1108 |
| Short tweets | 310 |
| Tweets kept | 1796 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nsi7oic/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaydeerinc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gqx2ecq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gqx2ecq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gaydeerinc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gaydeerinc/1614165768951/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gaydeerinc
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
lisa AI Bot
@gaydeerinc bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gaydeerinc's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gaydeerinc's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
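The code example was dropped during processing; a minimal sketch, mirroring the snippet in the markdown card above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
generator = pipeline('text-generation', model='huggingtweets/gaydeerinc')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```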
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">rokos basilisk construction advocate 🤖 AI Bot </div>
<div style="font-size: 15px">@gayguynewsnet bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gayguynewsnet's tweets](https://twitter.com/gayguynewsnet).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 592 |
| Retweets | 146 |
| Short tweets | 64 |
| Tweets kept | 382 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2yoxivok/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gayguynewsnet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dalf0je) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dalf0je/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gayguynewsnet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gayguynewsnet/1618199553249/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gayguynewsnet
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
rokos basilisk construction advocate AI Bot
@gayguynewsnet bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gayguynewsnet's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gayguynewsnet's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
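A minimal sketch of the same step via the Auto classes (an assumed alternative to the card's `pipeline` call):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard GPT-2 weights and tokenizer files are present
tokenizer = AutoTokenizer.from_pretrained('huggingtweets/gayguynewsnet')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/gayguynewsnet')

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```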
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">value bastard 🤖 AI Bot </div>
<div style="font-size: 15px">@gaypizzaboy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gaypizzaboy's tweets](https://twitter.com/gaypizzaboy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3138 |
| Retweets | 1548 |
| Short tweets | 147 |
| Tweets kept | 1443 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vxwbfva/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaypizzaboy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t6winba) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t6winba/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gaypizzaboy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gaypizzaboy/1614169105934/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gaypizzaboy
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
value bastard AI Bot
@gaypizzaboy bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gaypizzaboy's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gaypizzaboy's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
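The code block was stripped here; a minimal sketch as in the markdown card above, with illustrative (assumed) sampling settings:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/gaypizzaboy')

# Sampling settings below are illustrative, not from the original card
generator("My dream is", num_return_sequences=5,
          do_sample=True, temperature=0.9, top_p=0.95)
```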
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">العلجوم</div>
<div style="text-align: center; font-size: 14px;">@gaytoad2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from العلجوم.
| Data | العلجوم |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 379 |
| Short tweets | 1023 |
| Tweets kept | 1830 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w8lap6f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaytoad2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34u34diu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34u34diu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gaytoad2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gaytoad2/1629434767014/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gaytoad2
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
العلجوم
@gaytoad2
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from العلجوم.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gaytoad2's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
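A minimal sketch of the stripped snippet, with an added `set_seed` call (an assumption, for reproducible sampling):

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='huggingtweets/gaytoad2')

set_seed(42)  # illustrative seed, not part of the original card
generator("My dream is", num_return_sequences=5)
```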
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gender Critical Argument Bot 🤖 AI Bot </div>
<div style="font-size: 15px">@gcargumentbot bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gcargumentbot's tweets](https://twitter.com/gcargumentbot).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 1 |
| Short tweets | 223 |
| Tweets kept | 3026 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18f76f7w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gcargumentbot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2lzgykty) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2lzgykty/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gcargumentbot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gcargumentbot/1616766934700/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gcargumentbot
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gender Critical Argument Bot AI Bot
@gcargumentbot bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gcargumentbot's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gcargumentbot's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
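The fenced snippet did not survive processing; a minimal sketch, mirroring the markdown card above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
generator = pipeline('text-generation', model='huggingtweets/gcargumentbot')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```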
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">craz 🗿 🤖 AI Bot </div>
<div style="font-size: 15px">@geckogirl0 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@geckogirl0's tweets](https://twitter.com/geckogirl0).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3132 |
| Retweets | 1313 |
| Short tweets | 225 |
| Tweets kept | 1594 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qub6zq7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @geckogirl0's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wc0a99s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wc0a99s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/geckogirl0')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/geckogirl0/1617784269558/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/geckogirl0
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
craz AI Bot
@geckogirl0 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @geckogirl0's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @geckogirl0's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
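A minimal sketch of the stripped step using the Auto classes (an assumed route; the original card used `pipeline`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard GPT-2 weights and tokenizer files are present
tokenizer = AutoTokenizer.from_pretrained('huggingtweets/geckogirl0')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/geckogirl0')

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```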
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">100 gecs hater 🤖 AI Bot </div>
<div style="font-size: 15px">@gecshater bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gecshater's tweets](https://twitter.com/gecshater).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 67 |
| Short tweets | 550 |
| Tweets kept | 2621 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1zp0k65t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gecshater's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13yufu4u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13yufu4u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gecshater')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gecshater/1617797159320/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gecshater
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
100 gecs hater AI Bot
@gecshater bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gecshater's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gecshater's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
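The code block was dropped from this processed copy; a minimal sketch as in the markdown card above, with illustrative (assumed) sampling settings:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/gecshater')

# Sampling settings below are illustrative, not from the original card
generator("My dream is", num_return_sequences=5,
          do_sample=True, temperature=0.9, top_p=0.95)
```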
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/517689241812627456/pyBGyEo__400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lutz Büch 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@geilehirnbude bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@geilehirnbude's tweets](https://twitter.com/geilehirnbude).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3116</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>2906</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>53</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>157</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/wt8lffrr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @geilehirnbude's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1u8augcw) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1u8augcw/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/geilehirnbude'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/geilehirnbude
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lutz Büch AI Bot </div>
<div style="font-size: 15px; color: #657786">@geilehirnbude bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @geilehirnbude's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3116</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>2906</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>53</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>157</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @geilehirnbude's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/geilehirnbude'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">GEEGA ギガ 🔝</div>
<div style="text-align: center; font-size: 14px;">@generalgeega</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from GEEGA ギガ 🔝.
| Data | GEEGA ギガ 🔝 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 127 |
| Short tweets | 1477 |
| Tweets kept | 1646 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2owkgdxf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @generalgeega's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21lavo70) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21lavo70/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/generalgeega')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/generalgeega/1624741487901/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/generalgeega
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
GEEGA ギガ
@generalgeega
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from GEEGA ギガ.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @generalgeega's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
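A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/generalgeega')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```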
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Genji 🤖 AI Bot </div>
<div style="font-size: 15px">@genjitoday bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@genjitoday's tweets](https://twitter.com/genjitoday).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 515 |
| Retweets | 30 |
| Short tweets | 72 |
| Tweets kept | 413 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2t88j5a6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @genjitoday's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uhl7b30) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uhl7b30/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/genjitoday')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/genjitoday/1617772086820/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/genjitoday
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Genji AI Bot
@genjitoday bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @genjitoday's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @genjitoday's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
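A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/genjitoday')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```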
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Fishorse • Big Baldsuya 🤖 AI Bot </div>
<div style="font-size: 15px">@gentlefishorse bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gentlefishorse's tweets](https://twitter.com/gentlefishorse).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3142 |
| Retweets | 1903 |
| Short tweets | 159 |
| Tweets kept | 1080 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2h9g07c3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gentlefishorse's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ao0ru7g8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ao0ru7g8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gentlefishorse')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gentlefishorse/1614214431723/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gentlefishorse
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Fishorse • Big Baldsuya AI Bot
@gentlefishorse bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gentlefishorse's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gentlefishorse's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
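A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gentlefishorse')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```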
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/867463880582340608/b2CozYM-_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Suresh Venkatasubramanian 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@geomblog bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@geomblog's tweets](https://twitter.com/geomblog).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3199</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1301</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>178</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1720</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/5jk973vf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @geomblog's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3edsvd65) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3edsvd65/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/geomblog'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/geomblog/1600332316026/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/geomblog
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Suresh Venkatasubramanian AI Bot </div>
<div style="font-size: 15px; color: #657786">@geomblog bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @geomblog's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3199</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1301</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>178</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1720</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @geomblog's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/geomblog'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">George</div>
<div style="text-align: center; font-size: 14px;">@georgenotfound</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from George.
| Data | George |
| --- | --- |
| Tweets downloaded | 848 |
| Retweets | 6 |
| Short tweets | 310 |
| Tweets kept | 532 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2doc1coj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @georgenotfound's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/155sbgzb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/155sbgzb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/georgenotfound')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/georgenotfound/1622013920235/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/georgenotfound
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
George
@georgenotfound
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from George.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @georgenotfound's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
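A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/georgenotfound')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```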
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gerard Joling</div>
<div style="text-align: center; font-size: 14px;">@gerardjoling</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gerard Joling.
| Data | Gerard Joling |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 102 |
| Short tweets | 33 |
| Tweets kept | 3115 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/nnhwkwwc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gerardjoling's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hq3zjug) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hq3zjug/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gerardjoling')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gerardjoling/1628602714633/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gerardjoling
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Gerard Joling
@gerardjoling
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Gerard Joling.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gerardjoling's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
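A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gerardjoling')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```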
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ᐸGerardSans/ᐳ🤣🇬🇧</div>
<div style="text-align: center; font-size: 14px;">@gerardsans</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ᐸGerardSans/ᐳ🤣🇬🇧.
| Data | ᐸGerardSans/ᐳ🤣🇬🇧 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 648 |
| Short tweets | 586 |
| Tweets kept | 2016 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/115pr1rh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gerardsans's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10heg4by) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10heg4by/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gerardsans')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gerardsans/1634670781074/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gerardsans
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
ᐸGerardSans/ᐳ🇬🇧
@gerardsans
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from ᐸGerardSans/ᐳ🇬🇧.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gerardsans's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
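A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gerardsans')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```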
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">stockhausen by proxy 🤖 AI Bot </div>
<div style="font-size: 15px">@gesualdofan666 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gesualdofan666's tweets](https://twitter.com/gesualdofan666).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3179 |
| Retweets | 242 |
| Short tweets | 715 |
| Tweets kept | 2222 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/10hehnyy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gesualdofan666's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/g22xwzgd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/g22xwzgd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gesualdofan666')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gesualdofan666/1614135333322/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gesualdofan666
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
stockhausen by proxy AI Bot
@gesualdofan666 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gesualdofan666's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gesualdofan666's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
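A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gesualdofan666')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```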
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Don Hughes 🦌 🤖 AI Bot </div>
<div style="font-size: 15px">@getfiscal bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@getfiscal's tweets](https://twitter.com/getfiscal).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3221 |
| Retweets | 1002 |
| Short tweets | 409 |
| Tweets kept | 1810 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/d6p1oytn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @getfiscal's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28d4ali8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28d4ali8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/getfiscal')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/getfiscal/1616662151704/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/getfiscal
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Don Hughes AI Bot
@getfiscal bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @getfiscal's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @getfiscal's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
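A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/getfiscal')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```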
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Glenn Greenwald</div>
<div style="text-align: center; font-size: 14px;">@ggreenwald</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Glenn Greenwald.
| Data | Glenn Greenwald |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 324 |
| Short tweets | 160 |
| Tweets kept | 2764 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y433olp5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ggreenwald's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/duljho5y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/duljho5y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ggreenwald')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/ggreenwald/1643622558420/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ggreenwald
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Glenn Greenwald
@ggreenwald
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Glenn Greenwald.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ggreenwald's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
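A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ggreenwald')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```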
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Nia Hoshi✝⚸ Starlit flower child💫 🤖 AI Bot </div>
<div style="font-size: 15px">@ghoooostie bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ghoooostie's tweets](https://twitter.com/ghoooostie).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1014 |
| Retweets | 81 |
| Short tweets | 294 |
| Tweets kept | 639 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29pxu2zi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ghoooostie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/e3clb6b5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/e3clb6b5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ghoooostie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ghoooostie/1617871544860/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ghoooostie
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Nia Hoshi Starlit flower child AI Bot
@ghoooostie bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @ghoooostie's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ghoooostie's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
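A minimal usage sketch, mirroring the snippet from the rendered card above (the prompt string is only an example):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ghoooostie')

# Sample five continuations of an example prompt
generator("My dream is", num_return_sequences=5)
```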
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jum</div>
<div style="text-align: center; font-size: 14px;">@ghostmountainn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jum.
| Data | jum |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 839 |
| Short tweets | 609 |
| Tweets kept | 1792 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8lx8a815/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ghostmountainn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gafkpo6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gafkpo6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ghostmountainn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ghostmountainn/1623477690371/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ghostmountainn
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
jum
@ghostmountainn
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from jum.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ghostmountainn's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
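For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ghostmountainn')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```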
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gilational 🤖 AI Bot </div>
<div style="font-size: 15px">@gilational bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gilational's tweets](https://twitter.com/gilational).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 32 |
| Retweets | 0 |
| Short tweets | 1 |
| Tweets kept | 31 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3b638003/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gilational's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37tpk9wh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37tpk9wh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gilational')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gilational/1616731790752/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gilational
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gilational AI Bot
@gilational bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gilational's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gilational's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
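For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gilational')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```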
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lake Yin 🤖 AI Bot </div>
<div style="font-size: 15px">@gimoyin bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gimoyin's tweets](https://twitter.com/gimoyin).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 961 |
| Retweets | 636 |
| Short tweets | 31 |
| Tweets kept | 294 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1leyvbxk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gimoyin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/y78zi5ra) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/y78zi5ra/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gimoyin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gimoyin/1614111899984/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gimoyin
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Lake Yin AI Bot
@gimoyin bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gimoyin's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gimoyin's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
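For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gimoyin')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```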
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gingerbreadfork 🤖 AI Bot </div>
<div style="font-size: 15px">@gingerbreadfork bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gingerbreadfork's tweets](https://twitter.com/gingerbreadfork).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2680 |
| Retweets | 607 |
| Short tweets | 441 |
| Tweets kept | 1632 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bw0i5b8t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gingerbreadfork's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1eqf0r9u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1eqf0r9u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gingerbreadfork')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gingerbreadfork/1618181065321/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gingerbreadfork
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gingerbreadfork AI Bot
@gingerbreadfork bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gingerbreadfork's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gingerbreadfork's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
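For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gingerbreadfork')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```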
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">sátántangó nightcore 🤖 AI Bot </div>
<div style="font-size: 15px">@girlchrismarker bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@girlchrismarker's tweets](https://twitter.com/girlchrismarker).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 369 |
| Retweets | 67 |
| Short tweets | 79 |
| Tweets kept | 223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ex2qo7c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @girlchrismarker's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/e1iq56ka) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/e1iq56ka/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/girlchrismarker')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/girlchrismarker/1614168569443/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/girlchrismarker
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
sátántangó nightcore AI Bot
@girlchrismarker bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @girlchrismarker's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @girlchrismarker's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
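For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/girlchrismarker')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```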
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">a scared animal bites 🧷 vtuber 🤖 AI Bot </div>
<div style="font-size: 15px">@girlmeat5557 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@girlmeat5557's tweets](https://twitter.com/girlmeat5557).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 871 |
| Short tweets | 489 |
| Tweets kept | 1882 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wthiey09/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @girlmeat5557's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/io5hvymh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/io5hvymh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/girlmeat5557')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/girlmeat5557/1617790352329/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/girlmeat5557
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
a scared animal bites vtuber AI Bot
@girlmeat5557 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @girlmeat5557's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @girlmeat5557's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
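For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/girlmeat5557')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```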
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Anomalous Girl 🤖 AI Bot </div>
<div style="font-size: 15px">@girlshaped bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@girlshaped's tweets](https://twitter.com/girlshaped).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 304 |
| Retweets | 115 |
| Short tweets | 19 |
| Tweets kept | 170 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35c6178z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @girlshaped's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2re3ffqt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2re3ffqt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/girlshaped')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/girlshaped/1617757456002/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/girlshaped
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Anomalous Girl AI Bot
@girlshaped bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @girlshaped's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @girlshaped's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
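For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/girlshaped')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```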
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1149580808161599488/SdEQ8RS-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1302973092332023810/K9MureTy_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lukas Valatka & Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺</div>
<div style="text-align: center; font-size: 14px;">@gitanasnauseda-lukasvalatka-maldeikiene</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lukas Valatka & Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺.
| Data | Lukas Valatka | Gitanas Nausėda | Aušra Maldeikienė MEP 🇱🇹🇪🇺 |
| --- | --- | --- | --- |
| Tweets downloaded | 1155 | 706 | 348 |
| Retweets | 42 | 44 | 67 |
| Short tweets | 49 | 0 | 6 |
| Tweets kept | 1064 | 662 | 275 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31ci0ia0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gitanasnauseda-lukasvalatka-maldeikiene's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/62ihbz05) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/62ihbz05/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gitanasnauseda-lukasvalatka-maldeikiene')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gitanasnauseda-lukasvalatka-maldeikiene/1620508369581/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gitanasnauseda-lukasvalatka-maldeikiene
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Lukas Valatka & Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺
@gitanasnauseda-lukasvalatka-maldeikiene
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Lukas Valatka & Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gitanasnauseda-lukasvalatka-maldeikiene's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
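For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned multi-user model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gitanasnauseda-lukasvalatka-maldeikiene')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```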
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1302973092332023810/K9MureTy_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺</div>
<div style="text-align: center; font-size: 14px;">@gitanasnauseda-maldeikiene</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺.
| Data | Gitanas Nausėda | Aušra Maldeikienė MEP 🇱🇹🇪🇺 |
| --- | --- | --- |
| Tweets downloaded | 706 | 348 |
| Retweets | 44 | 67 |
| Short tweets | 0 | 6 |
| Tweets kept | 662 | 275 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/32c03vyj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gitanasnauseda-maldeikiene's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o9iq34s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o9iq34s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gitanasnauseda-maldeikiene')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gitanasnauseda-maldeikiene/1620507874092/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gitanasnauseda-maldeikiene
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺
@gitanasnauseda-maldeikiene
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Gitanas Nausėda & Aušra Maldeikienė MEP 🇱🇹🇪🇺.
| Data | Gitanas Nausėda | Aušra Maldeikienė MEP 🇱🇹🇪🇺 |
| --- | --- | --- |
| Tweets downloaded | 706 | 348 |
| Retweets | 44 | 67 |
| Short tweets | 0 | 6 |
| Tweets kept | 662 | 275 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gitanasnauseda-maldeikiene's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
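For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned multi-user model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gitanasnauseda-maldeikiene')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```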
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1338234899430600708/CGlmDFfZ_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Glacius 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@glacius_gaming bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@glacius_gaming's tweets](https://twitter.com/glacius_gaming).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3197</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>352</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>851</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1994</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3udvez1f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glacius_gaming's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1w01n2c4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1w01n2c4/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/glacius_gaming'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/glacius_gaming/1609012743315/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glacius_gaming
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Glacius AI Bot </div>
<div style="font-size: 15px; color: #657786">@glacius_gaming bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @glacius_gaming's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3197</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>352</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>851</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1994</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @glacius_gaming's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/glacius_gaming'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">elizabeth holmes’s fetus 🤖 AI Bot </div>
<div style="font-size: 15px">@glamdemon2004 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@glamdemon2004's tweets](https://twitter.com/glamdemon2004).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3097 |
| Retweets | 550 |
| Short tweets | 345 |
| Tweets kept | 2202 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2v9xfsja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glamdemon2004's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nyv7aua) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nyv7aua/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glamdemon2004')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/glamdemon2004/1616682008766/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glamdemon2004
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
elizabeth holmes’s fetus AI Bot
@glamdemon2004 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @glamdemon2004's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @glamdemon2004's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
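For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/glamdemon2004')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```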
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">tender corpse affection 🤖 AI Bot </div>
<div style="font-size: 15px">@glasseskin bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@glasseskin's tweets](https://twitter.com/glasseskin).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 724 |
| Short tweets | 354 |
| Tweets kept | 2135 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2e8tgnhf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glasseskin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/198cfuf1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/198cfuf1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glasseskin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/glasseskin/1617916620472/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glasseskin
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
tender corpse affection AI Bot
@glasseskin bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @glasseskin's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @glasseskin's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
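For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/glasseskin')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```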
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Glennys Egan (she/her) 🤖 AI Bot </div>
<div style="font-size: 15px">@gleegz bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gleegz's tweets](https://twitter.com/gleegz).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3215 |
| Retweets | 272 |
| Short tweets | 386 |
| Tweets kept | 2557 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mtxfs6h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gleegz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1d2xgejt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1d2xgejt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gleegz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gleegz/1616717872074/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gleegz
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Glennys Egan (she/her) AI Bot
@gleegz bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gleegz's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gleegz's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
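For example, a minimal sketch mirroring the pipeline usage shown elsewhere in this document (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gleegz')

# Sample five completions for an illustrative prompt
generator("My dream is", num_return_sequences=5)
```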
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Inkling ꩜f Jꙮy 🔅🔎🔥 🤖 AI Bot </div>
<div style="font-size: 15px">@glitchesroux bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@glitchesroux's tweets](https://twitter.com/glitchesroux).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 2579 |
| Short tweets | 105 |
| Tweets kept | 494 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1h103fds/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glitchesroux's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7rgoifll) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7rgoifll/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glitchesroux')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/glitchesroux/1616902247472/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glitchesroux
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Inkling ꩜f Jꙮy AI Bot
@glitchesroux bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @glitchesroux's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @glitchesroux's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
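A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/glitchesroux')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```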
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">💙💗🤍 Mama Ava's House of Fun 💙💗🤍</div>
<div style="text-align: center; font-size: 14px;">@glitchy22</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 💙💗🤍 Mama Ava's House of Fun 💙💗🤍.
| Data | 💙💗🤍 Mama Ava's House of Fun 💙💗🤍 |
| --- | --- |
| Tweets downloaded | 1690 |
| Retweets | 198 |
| Short tweets | 387 |
| Tweets kept | 1105 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2h5yvnyr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glitchy22's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t3bkiiv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t3bkiiv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glitchy22')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/glitchy22/1643317484748/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glitchy22
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Mama Ava's House of Fun
@glitchy22
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Mama Ava's House of Fun.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @glitchy22's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
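A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/glitchy22')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```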
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gabriel 🏳️🌈💦😈🌎🔥🥺 🤖 AI Bot </div>
<div style="font-size: 15px">@glockmetal bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@glockmetal's tweets](https://twitter.com/glockmetal).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3206 |
| Retweets | 290 |
| Short tweets | 921 |
| Tweets kept | 1995 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dx8iokq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glockmetal's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3s7p5y1r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3s7p5y1r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glockmetal')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/glockmetal/1617166556495/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glockmetal
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gabriel AI Bot
@glockmetal bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @glockmetal's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @glockmetal's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
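A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/glockmetal')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```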
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">GlowDonk 🤖 AI Bot </div>
<div style="font-size: 15px">@glowdonk bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@glowdonk's tweets](https://twitter.com/glowdonk).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3228 |
| Retweets | 190 |
| Short tweets | 761 |
| Tweets kept | 2277 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/sajyw4x6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glowdonk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27srcmsx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27srcmsx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glowdonk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/glowdonk/1620242160895/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glowdonk
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GlowDonk AI Bot
@glowdonk bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @glowdonk's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @glowdonk's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
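A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/glowdonk')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```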
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">gl0w</div>
<div style="text-align: center; font-size: 14px;">@glownigga</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from gl0w.
| Data | gl0w |
| --- | --- |
| Tweets downloaded | 3132 |
| Retweets | 157 |
| Short tweets | 776 |
| Tweets kept | 2199 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3t0rqzrr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glownigga's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qjksoiw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qjksoiw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glownigga')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/glownigga/1626905715267/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/glownigga
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
gl0w
@glownigga
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from gl0w.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @glownigga's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
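A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/glownigga')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```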
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1314997569475547137/4x1-5ejx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/858198338444836864/OFlImt8f_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Go Ando / PREDUCTS / THE GUILD & Ken McAlinn & V</div>
<div style="text-align: center; font-size: 14px;">@goando-kenmcalinn-voluntas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Go Ando / PREDUCTS / THE GUILD & Ken McAlinn & V.
| Data | Go Ando / PREDUCTS / THE GUILD | Ken McAlinn | V |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3250 | 3246 |
| Retweets | 91 | 22 | 1040 |
| Short tweets | 1680 | 2144 | 698 |
| Tweets kept | 1476 | 1084 | 1508 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kzei9u5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando-kenmcalinn-voluntas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mdna8jc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mdna8jc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goando-kenmcalinn-voluntas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/goando-kenmcalinn-voluntas/1643509465268/predictions.png", "widget": [{"text": "My dream is"}]}
| null |
huggingtweets/goando-kenmcalinn-voluntas
|
[
"huggingtweets",
"en",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#huggingtweets #en #region-us
|
AI CYBORG
Go Ando / PREDUCTS / THE GUILD & Ken McAlinn & V
@goando-kenmcalinn-voluntas
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Go Ando / PREDUCTS / THE GUILD & Ken McAlinn & V.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goando-kenmcalinn-voluntas's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
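A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/goando-kenmcalinn-voluntas')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```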
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1145832571214815232/KYNcOP04_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1281544202627674112/zglo72WL_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch</div>
<div style="text-align: center; font-size: 14px;">@goando-tsuchinao83-za09313103</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch.
| Data | 土屋尚史 / Goodpatch | Go Ando / PREDUCTS / THE GUILD | shun nozaki / Goodpatch |
| --- | --- | --- | --- |
| Tweets downloaded | 3236 | 3250 | 798 |
| Retweets | 1577 | 97 | 34 |
| Short tweets | 914 | 1729 | 458 |
| Tweets kept | 745 | 1424 | 306 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31bsh75f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando-tsuchinao83-za09313103's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goando-tsuchinao83-za09313103')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/goando-tsuchinao83-za09313103/1643622988627/predictions.png", "widget": [{"text": "My dream is"}]}
| null |
huggingtweets/goando-tsuchinao83-za09313103
|
[
"huggingtweets",
"en",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#huggingtweets #en #region-us
|
AI CYBORG
土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch
@goando-tsuchinao83-za09313103
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from 土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goando-tsuchinao83-za09313103's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
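A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/goando-tsuchinao83-za09313103')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```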
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Go Ando / PREDUCTS / THE GUILD</div>
<div style="text-align: center; font-size: 14px;">@goando</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Go Ando / PREDUCTS / THE GUILD.
| Data | Go Ando / PREDUCTS / THE GUILD |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 91 |
| Short tweets | 1680 |
| Tweets kept | 1476 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37h8wmzh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qeev4eu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qeev4eu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goando')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/goando/1643510064373/predictions.png", "widget": [{"text": "My dream is"}]}
| null |
huggingtweets/goando
|
[
"huggingtweets",
"en",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#huggingtweets #en #region-us
|
AI BOT
Go Ando / PREDUCTS / THE GUILD
@goando
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Go Ando / PREDUCTS / THE GUILD.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goando's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
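A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/goando')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```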
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1389328774085365767/QFuxMWoj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gay Shawn 🏳️🌈 & 🔻L O W R Y 🔻</div>
<div style="text-align: center; font-size: 14px;">@goatlich-yagisabi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gay Shawn 🏳️🌈 & 🔻L O W R Y 🔻.
| Data | Gay Shawn 🏳️🌈 | 🔻L O W R Y 🔻 |
| --- | --- | --- |
| Tweets downloaded | 406 | 3156 |
| Retweets | 67 | 390 |
| Short tweets | 50 | 214 |
| Tweets kept | 289 | 2552 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wtnxwy1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goatlich-yagisabi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/qrbyfgtb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/qrbyfgtb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goatlich-yagisabi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/goatlich-yagisabi/1624475783796/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/goatlich-yagisabi
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Gay Shawn & L O W R Y
@goatlich-yagisabi
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Gay Shawn & L O W R Y.
| Data | Gay Shawn | L O W R Y |
| --- | --- | --- |
| Tweets downloaded | 406 | 3156 |
| Retweets | 67 | 390 |
| Short tweets | 50 | 214 |
| Tweets kept | 289 | 2552 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goatlich-yagisabi's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
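A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/goatlich-yagisabi')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```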
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1326597959710994434/Mzw1eYU3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">GoDaddy Pro 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@godaddypro bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@godaddypro's tweets](https://twitter.com/godaddypro).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>654</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>86</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>23</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>545</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1axtg72y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @godaddypro's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/q9egqu3x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/q9egqu3x/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/godaddypro'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/godaddypro/1606863861513/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/godaddypro
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">GoDaddy Pro AI Bot </div>
<div style="font-size: 15px; color: #657786">@godaddypro bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @godaddypro's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>654</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>86</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>23</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>545</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @godaddypro's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/godaddypro'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Thomas Godden 🤖 AI Bot </div>
<div style="font-size: 15px">@goddenthomas bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@goddenthomas's tweets](https://twitter.com/goddenthomas).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 308 |
| Retweets | 29 |
| Short tweets | 5 |
| Tweets kept | 274 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/i8dnp3td/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goddenthomas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34v02f8a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34v02f8a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goddenthomas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/goddenthomas/1617800973798/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/goddenthomas
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Thomas Godden AI Bot
@goddenthomas bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @goddenthomas's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goddenthomas's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
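A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/goddenthomas')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```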
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">GPT-2 Religion AI</div>
<div style="text-align: center; font-size: 14px;">@gods_txt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from GPT-2 Religion AI.
| Data | GPT-2 Religion AI |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 66 |
| Short tweets | 9 |
| Tweets kept | 3174 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/l1h0u8uh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gods_txt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i75xs06) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i75xs06/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gods_txt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gods_txt/1623749962893/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gods_txt
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
GPT-2 Religion AI
@gods\_txt
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from GPT-2 Religion AI.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gods\_txt's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
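A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/gods_txt')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```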
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LOVER//PARIAH</div>
<div style="text-align: center; font-size: 14px;">@godslovepariah</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LOVER//PARIAH.
| Data | LOVER//PARIAH |
| --- | --- |
| Tweets downloaded | 525 |
| Retweets | 9 |
| Short tweets | 10 |
| Tweets kept | 506 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6l5fj9xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @godslovepariah's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/godslovepariah')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/godslovepariah/1642565537762/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/godslovepariah
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
LOVER//PARIAH
@godslovepariah
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from LOVER//PARIAH.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @godslovepariah's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
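A minimal usage sketch, with the model id taken from this card:

```python
from transformers import pipeline

# Build a text-generation pipeline backed by this card's fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/godslovepariah')

# Sample five continuations of the starter prompt
generator("My dream is", num_return_sequences=5)
```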
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1324123540556316673/YQjGLFLJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">im pete online & Grateful King</div>
<div style="text-align: center; font-size: 14px;">@gohere4porn-onlinepete</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from im pete online & Grateful King.
| Data | im pete online | Grateful King |
| --- | --- | --- |
| Tweets downloaded | 3190 | 2141 |
| Retweets | 94 | 557 |
| Short tweets | 1003 | 217 |
| Tweets kept | 2093 | 1367 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w0274vc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gohere4porn-onlinepete's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rvkp85n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rvkp85n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gohere4porn-onlinepete')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gohere4porn-onlinepete/1625638031693/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gohere4porn-onlinepete
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
im pete online & Grateful King
@gohere4porn-onlinepete
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from im pete online & Grateful King.
| Data | im pete online | Grateful King |
| --- | --- | --- |
| Tweets downloaded | 3190 | 2141 |
| Retweets | 94 | 557 |
| Short tweets | 1003 | 217 |
| Tweets kept | 2093 | 1367 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gohere4porn-onlinepete's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
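A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gohere4porn-onlinepete')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```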
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gordon Mohr ꧁👁👁꧂ 🤖 AI Bot </div>
<div style="font-size: 15px">@gojomo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gojomo's tweets](https://twitter.com/gojomo).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 224 |
| Short tweets | 251 |
| Tweets kept | 2773 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n28dkpx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gojomo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uusd4gca) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uusd4gca/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gojomo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gojomo
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gordon Mohr ꧁꧂ AI Bot
@gojomo bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gojomo's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gojomo's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
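A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gojomo')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```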
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/849638997286674433/MP_VFga5_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Sasha Goldshtein 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@goldshtn bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@goldshtn's tweets](https://twitter.com/goldshtn).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3228</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>334</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>110</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2784</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/vyukb3ol/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goldshtn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/26u1d2kp) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/26u1d2kp/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/goldshtn'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/goldshtn
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Sasha Goldshtein AI Bot </div>
<div style="font-size: 15px; color: #657786">@goldshtn bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @goldshtn's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3228</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>334</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>110</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2784</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @goldshtn's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/goldshtn'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Seth Goldwasser 🤖 AI Bot </div>
<div style="font-size: 15px">@goldwasser_seth bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@goldwasser_seth's tweets](https://twitter.com/goldwasser_seth).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 531 |
| Retweets | 8 |
| Short tweets | 76 |
| Tweets kept | 447 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e8p1yskc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goldwasser_seth's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mj33xci4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mj33xci4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goldwasser_seth')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/goldwasser_seth/1616738324749/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/goldwasser_seth
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Seth Goldwasser AI Bot
@goldwasser\_seth bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @goldwasser\_seth's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goldwasser\_seth's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
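A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/goldwasser_seth')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```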
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">g head 🤖 AI Bot </div>
<div style="font-size: 15px">@gonnhead bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gonnhead's tweets](https://twitter.com/gonnhead).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3209 |
| Retweets | 2404 |
| Short tweets | 400 |
| Tweets kept | 405 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fzjhi41e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gonnhead's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36u4rhhk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36u4rhhk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gonnhead')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gonnhead/1617924924473/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gonnhead
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
g head AI Bot
@gonnhead bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gonnhead's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gonnhead's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
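A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gonnhead')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```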
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Good Tweetman</div>
<div style="text-align: center; font-size: 14px;">@goodtweet_man</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Good Tweetman.
| Data | Good Tweetman |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 734 |
| Short tweets | 643 |
| Tweets kept | 1848 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2czt5qbq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goodtweet_man's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tanvki3u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tanvki3u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goodtweet_man')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/goodtweet_man/1627279760723/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/goodtweet_man
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Good Tweetman
@goodtweet\_man
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Good Tweetman.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goodtweet\_man's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
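A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/goodtweet_man')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```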
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1343584679664873479/Xos3xQfk_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Google 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@google bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@google's tweets](https://twitter.com/google).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3247</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>48</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>3</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3196</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ulajd1f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @google's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hx7jdkp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hx7jdkp/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/google'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/google/1609714473367/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/google
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Google AI Bot </div>
<div style="font-size: 15px; color: #657786">@google bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @google's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3247</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>48</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>3</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3196</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @google's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/google'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Google AI</div>
<div style="text-align: center; font-size: 14px;">@googleai</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Google AI.
| Data | Google AI |
| --- | --- |
| Tweets downloaded | 1754 |
| Retweets | 51 |
| Short tweets | 20 |
| Tweets kept | 1683 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/176c02iv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @googleai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3cg366zk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3cg366zk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/googleai')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/googleai/1639129810325/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/googleai
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Google AI
@googleai
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Google AI.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @googleai's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
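A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/googleai')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```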
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Goon 🤖 AI Bot </div>
<div style="font-size: 15px">@goon_lagoon__ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@goon_lagoon__'s tweets](https://twitter.com/goon_lagoon__).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2654 |
| Retweets | 1390 |
| Short tweets | 186 |
| Tweets kept | 1078 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11if3arq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goon_lagoon__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1fzipcm4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1fzipcm4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goon_lagoon__')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/goon_lagoon__/1617849869460/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/goon_lagoon__
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Goon AI Bot
@goon\_lagoon\_\_ bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @goon\_lagoon\_\_'s tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @goon\_lagoon\_\_'s tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
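A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/goon_lagoon__')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```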
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gordon Ramsay 🤖 AI Bot </div>
<div style="font-size: 15px">@gordonramsay bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gordonramsay's tweets](https://twitter.com/gordonramsay).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 269 |
| Short tweets | 206 |
| Tweets kept | 2771 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27mcq63k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gordonramsay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12n07etn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12n07etn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gordonramsay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gordonramsay/1614174227495/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gordonramsay
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gordon Ramsay AI Bot
@gordonramsay bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gordonramsay's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gordonramsay's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
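A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gordonramsay')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```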
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gotham Sharma 🤖 AI Bot </div>
<div style="font-size: 15px">@gothamjsharma bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gothamjsharma's tweets](https://twitter.com/gothamjsharma).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3029 |
| Retweets | 1090 |
| Short tweets | 288 |
| Tweets kept | 1651 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3d2w4exv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gothamjsharma's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17mzwxqx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17mzwxqx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gothamjsharma')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gothamjsharma/1618690355639/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gothamjsharma
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gotham Sharma AI Bot
@gothamjsharma bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gothamjsharma's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gothamjsharma's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
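A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gothamjsharma')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```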
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Calum Macleod</div>
<div style="text-align: center; font-size: 14px;">@gozusabu</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Calum Macleod.
| Data | Calum Macleod |
| --- | --- |
| Tweets downloaded | 1926 |
| Retweets | 673 |
| Short tweets | 279 |
| Tweets kept | 974 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y71yp06o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gozusabu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/dwp3t07q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/dwp3t07q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gozusabu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gozusabu/1627054557412/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gozusabu
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Calum Macleod
@gozusabu
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Calum Macleod.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gozusabu's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
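A runnable sketch, mirroring the card's own snippet (standard transformers pipeline API; the model id is the one from this card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gozusabu')

# Generate five completions from the same prompt
generator("My dream is", num_return_sequences=5)
```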
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/858338118218506240/TpJ4sp1v_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Guillaume Peyronnet 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@gpeyronnet bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gpeyronnet's tweets](https://twitter.com/gpeyronnet).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3212</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>633</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>160</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2419</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1jp5vewz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gpeyronnet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2dz99sln) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2dz99sln/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gpeyronnet'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gpeyronnet
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Guillaume Peyronnet AI Bot </div>
<div style="font-size: 15px; color: #657786">@gpeyronnet bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @gpeyronnet's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3212</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>633</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>160</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2419</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @gpeyronnet's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gpeyronnet'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1317293570496266241/skdF2SBu_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">AI Wint Pontifex 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@gpt2drilpapa bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gpt2drilpapa's tweets](https://twitter.com/gpt2drilpapa).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>218</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>22</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>3</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>193</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yjlghvn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gpt2drilpapa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m5q357m9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m5q357m9/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gpt2drilpapa'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gpt2drilpapa/1611164765660/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gpt2drilpapa
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">AI Wint Pontifex AI Bot </div>
<div style="font-size: 15px; color: #657786">@gpt2drilpapa bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @gpt2drilpapa's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>218</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>22</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>3</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>193</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @gpt2drilpapa's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gpt2drilpapa'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">naomi 🍄 🌙 🤖 AI Bot </div>
<div style="font-size: 15px">@gr1my_w41fu bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gr1my_w41fu's tweets](https://twitter.com/gr1my_w41fu).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3228 |
| Retweets | 603 |
| Short tweets | 619 |
| Tweets kept | 2006 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3neoafnn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gr1my_w41fu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/yds64f47) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/yds64f47/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gr1my_w41fu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gr1my_w41fu/1617756086013/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gr1my_w41fu
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
naomi AI Bot
@gr1my\_w41fu bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gr1my\_w41fu's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gr1my\_w41fu's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
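For example (a minimal sketch mirroring the snippet in the full card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gr1my_w41fu')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```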
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🌈ISⒶ//TED🃏🪳🍋 🤖 AI Bot </div>
<div style="font-size: 15px">@gr8ful_ted bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gr8ful_ted's tweets](https://twitter.com/gr8ful_ted).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 347 |
| Short tweets | 657 |
| Tweets kept | 2188 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pvs8733/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gr8ful_ted's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/evv8duo0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/evv8duo0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gr8ful_ted')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gr8ful_ted/1614111887321/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gr8ful_ted
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
ISⒶ//TED🃏 AI Bot
@gr8ful\_ted bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gr8ful\_ted's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gr8ful\_ted's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
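For example (a minimal sketch mirroring the snippet in the full card above):

```python
from transformers import pipeline

# Text-generation pipeline around the fine-tuned model
generator = pipeline('text-generation',
                     model='huggingtweets/gr8ful_ted')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```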
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">gracchus strupp 🤖 AI Bot </div>
<div style="font-size: 15px">@gracchusstrupp bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gracchusstrupp's tweets](https://twitter.com/gracchusstrupp).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1189 |
| Retweets | 690 |
| Short tweets | 56 |
| Tweets kept | 443 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1m083rwp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gracchusstrupp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/153lr6i9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/153lr6i9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gracchusstrupp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gracchusstrupp/1617828463761/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gracchusstrupp
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gracchus strupp AI Bot
@gracchusstrupp bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gracchusstrupp's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gracchusstrupp's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
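For example (a minimal sketch mirroring the snippet in the full card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gracchusstrupp')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```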
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1255141505720672257/flNLLFAC_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">グランブルー EN 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@granblue_en bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@granblue_en's tweets](https://twitter.com/granblue_en).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3222</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>252</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>59</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2911</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2pwcb5ci/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @granblue_en's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2tq5wz9d) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2tq5wz9d/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/granblue_en'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/granblue_en/1600399682930/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/granblue_en
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">グランブルー EN AI Bot </div>
<div style="font-size: 15px; color: #657786">@granblue_en bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @granblue_en's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3222</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>252</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>59</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2911</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @granblue_en's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/granblue_en'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ju1ce💎</div>
<div style="text-align: center; font-size: 14px;">@grapefried</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ju1ce💎.
| Data | ju1ce💎 |
| --- | --- |
| Tweets downloaded | 2034 |
| Retweets | 504 |
| Short tweets | 403 |
| Tweets kept | 1127 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1actx5cl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @grapefried's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a1nwhd0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a1nwhd0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/grapefried')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/grapefried/1626857673378/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/grapefried
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
ju1ce
@grapefried
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from ju1ce.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @grapefried's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
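For example (a minimal sketch mirroring the snippet in the full card above):

```python
from transformers import pipeline

# Text-generation pipeline backed by the fine-tuned GPT-2
generator = pipeline('text-generation',
                     model='huggingtweets/grapefried')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```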
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gray 🤖 AI Bot </div>
<div style="font-size: 15px">@grayvtuber bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@grayvtuber's tweets](https://twitter.com/grayvtuber).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 363 |
| Retweets | 14 |
| Short tweets | 52 |
| Tweets kept | 297 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rqb2jnzt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @grayvtuber's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fn16ljs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fn16ljs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/grayvtuber')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/grayvtuber/1619622413978/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/grayvtuber
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gray AI Bot
@grayvtuber bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @grayvtuber's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @grayvtuber's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
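For example (a minimal sketch mirroring the snippet in the full card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/grayvtuber')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```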
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000520968918/d38fd96468e9ba14c1f9f022eb0c4e61_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Great Minds Quotes 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@greatestquotes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@greatestquotes's tweets](https://twitter.com/greatestquotes).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3202</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3201</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3unqair1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @greatestquotes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/368rnmms) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/368rnmms/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/greatestquotes'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/greatestquotes/1603925133471/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/greatestquotes
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Great Minds Quotes AI Bot </div>
<div style="font-size: 15px; color: #657786">@greatestquotes bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @greatestquotes's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3202</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3201</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @greatestquotes's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/greatestquotes'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1182443074963857408/PH0SGZfK_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ray Greene 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@greene_ray bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@greene_ray's tweets](https://twitter.com/greene_ray).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3187</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>867</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>334</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1986</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1njnu788/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @greene_ray's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1cwalrjv) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1cwalrjv/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/greene_ray'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/greene_ray/1604420107211/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/greene_ray
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ray Greene AI Bot </div>
<div style="font-size: 15px; color: #657786">@greene_ray bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @greene_ray's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3187</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>867</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>334</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1986</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @greene_ray's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/greene_ray'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gremlin ☭ 🤖 AI Bot </div>
<div style="font-size: 15px">@gremlimbs bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gremlimbs's tweets](https://twitter.com/gremlimbs).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2223 |
| Retweets | 448 |
| Short tweets | 324 |
| Tweets kept | 1451 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d7pcd3r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gremlimbs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1egm6qyj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1egm6qyj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gremlimbs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gremlimbs/1614107802037/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gremlimbs
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Gremlin AI Bot
@gremlimbs bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gremlimbs's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gremlimbs's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
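For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gremlimbs')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```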
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BON</div>
<div style="text-align: center; font-size: 14px;">@gresham2x</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BON.
| Data | BON |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 172 |
| Short tweets | 708 |
| Tweets kept | 2355 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mb1dknt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gresham2x's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kgizc73h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kgizc73h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gresham2x')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gresham2x/1623806625441/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gresham2x
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
BON
@gresham2x
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from BON.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gresham2x's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
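For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gresham2x')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```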
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">mal shah 🤖 AI Bot </div>
<div style="font-size: 15px">@griceposting bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@griceposting's tweets](https://twitter.com/griceposting).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 247 |
| Short tweets | 357 |
| Tweets kept | 2636 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25trxjkq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @griceposting's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/q9yoq7u8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/q9yoq7u8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/griceposting')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/griceposting/1616682203001/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/griceposting
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mal shah AI Bot
@griceposting bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @griceposting's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @griceposting's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
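For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/griceposting')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```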
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🩸𝕮𝖚𝖑𝖙𝖘𝖚𝖑𝖙𝖆𝖓𝖙🩸 🤖 AI Bot </div>
<div style="font-size: 15px">@gritcult bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gritcult's tweets](https://twitter.com/gritcult).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 554 |
| Short tweets | 558 |
| Tweets kept | 2132 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nikyb7z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gritcult's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13st5rcg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13st5rcg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gritcult')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gritcult/1616928724478/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gritcult
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
𝕮𝖚𝖑𝖙𝖘𝖚𝖑𝖙𝖆𝖓𝖙 AI Bot
@gritcult bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gritcult's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gritcult's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
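For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gritcult')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```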
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">B.S.E. Guillotine Engineering 🤖 AI Bot </div>
<div style="font-size: 15px">@grubadubflub bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@grubadubflub's tweets](https://twitter.com/grubadubflub).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2543 |
| Retweets | 559 |
| Short tweets | 143 |
| Tweets kept | 1841 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1axfr66g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @grubadubflub's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vt3dbdy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vt3dbdy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/grubadubflub')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/grubadubflub/1614098423599/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/grubadubflub
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
B.S.E. Guillotine Engineering AI Bot
@grubadubflub bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @grubadubflub's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @grubadubflub's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
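For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/grubadubflub')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```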
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">George Siemens 🤖 AI Bot </div>
<div style="font-size: 15px">@gsiemens bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gsiemens's tweets](https://twitter.com/gsiemens).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1291 |
| Retweets | 84 |
| Short tweets | 79 |
| Tweets kept | 1128 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39omc3b3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gsiemens's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ifsl362) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ifsl362/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gsiemens')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gsiemens/1617219776300/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gsiemens
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
George Siemens AI Bot
@gsiemens bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gsiemens's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gsiemens's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
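For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gsiemens')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```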
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">gudapoyo 🤖 AI Bot </div>
<div style="font-size: 15px">@gudapoyo2 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gudapoyo2's tweets](https://twitter.com/gudapoyo2).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 32 |
| Short tweets | 468 |
| Tweets kept | 2707 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1duxqzag/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gudapoyo2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22equxej) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22equxej/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gudapoyo2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gudapoyo2/1614096751603/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gudapoyo2
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gudapoyo AI Bot
@gudapoyo2 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gudapoyo2's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gudapoyo2's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
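For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/gudapoyo2')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```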
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jamie Moffatt 🤖 AI Bot </div>
<div style="font-size: 15px">@guestyperson bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@guestyperson's tweets](https://twitter.com/guestyperson).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3152 |
| Retweets | 1179 |
| Short tweets | 192 |
| Tweets kept | 1781 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pvm3v6e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @guestyperson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nuca4qh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nuca4qh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/guestyperson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/guestyperson/1614136556129/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/guestyperson
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jamie Moffatt AI Bot
@guestyperson bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @guestyperson's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @guestyperson's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
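For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/guestyperson')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```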
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/976898364901134338/IOR5RTSc_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Sylvain Gugger 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@guggersylvain bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@guggersylvain's tweets](https://twitter.com/guggersylvain).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>571</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>202</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>31</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>338</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/32frx4d8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @guggersylvain's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/21uu01o9) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/21uu01o9/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/guggersylvain'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/guggersylvain
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Sylvain Gugger AI Bot </div>
<div style="font-size: 15px; color: #657786">@guggersylvain bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @guggersylvain's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>571</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>202</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>31</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>338</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @guggersylvain's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/guggersylvain'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Guillermo Angeris 🤖 AI Bot </div>
<div style="font-size: 15px">@guilleangeris bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@guilleangeris's tweets](https://twitter.com/guilleangeris).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 273 |
| Short tweets | 303 |
| Tweets kept | 2654 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tg19y8a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @guilleangeris's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2shp18hb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2shp18hb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/guilleangeris')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/guilleangeris/1616612740170/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/guilleangeris
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Guillermo Angeris AI Bot
@guilleangeris bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @guilleangeris's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @guilleangeris's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
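For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/guilleangeris')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```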
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mayor Guy Fieri</div>
<div style="text-align: center; font-size: 14px;">@guyfieri</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mayor Guy Fieri.
| Data | Mayor Guy Fieri |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 978 |
| Short tweets | 132 |
| Tweets kept | 2138 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19tc6yav/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @guyfieri's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nefj2bb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nefj2bb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/guyfieri')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/guyfieri/1663110657180/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/guyfieri
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Mayor Guy Fieri
@guyfieri
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Mayor Guy Fieri.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @guyfieri's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
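For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/guyfieri')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```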
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">GuyFoxDay 🤖 AI Bot </div>
<div style="font-size: 15px">@guyfoxday bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@guyfoxday's tweets](https://twitter.com/guyfoxday).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 561 |
| Short tweets | 316 |
| Tweets kept | 2367 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/o580jv43/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @guyfoxday's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rf94m3w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rf94m3w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/guyfoxday')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/guyfoxday/1617809504933/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/guyfoxday
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GuyFoxDay AI Bot
@guyfoxday bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @guyfoxday's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @guyfoxday's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
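For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/guyfoxday')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```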
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">GuyWithThePie (🎂 in 1 week)</div>
<div style="text-align: center; font-size: 14px;">@guywiththepie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from GuyWithThePie (🎂 in 1 week).
| Data | GuyWithThePie (🎂 in 1 week) |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 445 |
| Short tweets | 422 |
| Tweets kept | 2337 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lir19ia/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @guywiththepie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ru7uv7v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ru7uv7v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/guywiththepie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/guywiththepie/1627573203188/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/guywiththepie
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
GuyWithThePie (🎂 in 1 week)
@guywiththepie
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from GuyWithThePie (🎂 in 1 week).
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @guywiththepie's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
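For example, using the Transformers pipeline:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/guywiththepie')

# Generate five sample continuations of the prompt
generator("My dream is", num_return_sequences=5)
```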
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/424495004/GuidoAvatar_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Guido van Rossum 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@gvanrossum bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gvanrossum's tweets](https://twitter.com/gvanrossum).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3192</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>166</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>169</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2857</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cyt5kq0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gvanrossum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rte53sg6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rte53sg6/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gvanrossum'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gvanrossum/1605218553043/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gvanrossum
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Guido van Rossum AI Bot </div>
<div style="font-size: 15px; color: #657786">@gvanrossum bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @gvanrossum's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3192</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>166</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>169</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2857</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @gvanrossum's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/gvanrossum'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Anarcho-Gwendolism 🧬 🤖 AI Bot </div>
<div style="font-size: 15px">@gwenvara_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gwenvara_'s tweets](https://twitter.com/gwenvara_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3069 |
| Retweets | 1831 |
| Short tweets | 350 |
| Tweets kept | 888 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/p9ao8jnc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gwenvara_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/l9zed4di) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/l9zed4di/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gwenvara_')
generator("My dream is", num_return_sequences=5)
```
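Generation is stochastic; for repeatable output you can fix the random seed first. A small sketch using `transformers.set_seed`:

```python
from transformers import pipeline, set_seed

set_seed(42)  # makes the sampled continuations reproducible
generator = pipeline('text-generation',
                     model='huggingtweets/gwenvara_')
generator("My dream is", num_return_sequences=5)
```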
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/gwenvara_/1616736053941/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/gwenvara_
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Anarcho-Gwendolism AI Bot
@gwenvara\_ bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @gwenvara\_'s tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @gwenvara\_'s tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
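The URL-stripped text above lost the code block; it is restored below from the full card earlier in this file.

```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/gwenvara_')
generator("My dream is", num_return_sequences=5)
```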
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/993273677386059777/TngqqZck_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Frank Soboczenski 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@h21k bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@h21k's tweets](https://twitter.com/h21k).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>204</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>176</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3vw58heg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @h21k's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/15xkammd) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/15xkammd/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/h21k'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
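The pipeline returns one dict per generated sequence; a short usage note on reading the results:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/h21k')
# With num_return_sequences=5 the call yields a list of five dicts.
for out in generator("My dream is", num_return_sequences=5):
    print(out['generated_text'])
```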
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/h21k/1602301931118/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/h21k
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Frank Soboczenski AI Bot </div>
<div style="font-size: 15px; color: #657786">@h21k bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @h21k's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>204</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>176</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @h21k's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/h21k'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
] |
[
57,
34,
427,
75,
9,
167,
48,
58
] |
[
"passage: TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n## How does it work?\n\nThe model uses the following pipeline.\n\n!pipeline\n\nTo understand how the model was developed, check the W&B report."
] |
[
-0.04084593802690506,
0.035596683621406555,
-0.0024457171093672514,
0.04662978649139404,
0.10991521924734116,
0.022836215794086456,
0.12812861800193787,
0.0424627922475338,
-0.03746044635772705,
-0.03597303107380867,
0.22758877277374268,
0.1009177565574646,
0.03089720755815506,
0.17962171137332916,
0.010350672528147697,
-0.2703946828842163,
0.015237200073897839,
0.0647135004401207,
-0.07720091193914413,
0.15752871334552765,
0.05562684312462807,
-0.049801189452409744,
0.08214939385652542,
0.032038331031799316,
-0.165513277053833,
-0.004831716418266296,
-0.02072383277118206,
-0.04504403471946716,
0.09232694655656815,
0.06956911832094193,
0.07011176645755768,
0.034282486885786057,
0.017851393669843674,
-0.0714372918009758,
0.06354191154241562,
0.014452377334237099,
-0.02349161170423031,
0.13615116477012634,
0.028668763116002083,
-0.0002947957837022841,
0.1798527091741562,
0.11815319955348969,
0.016722125932574272,
0.016196802258491516,
-0.1166125237941742,
-0.0606788769364357,
0.012365915812551975,
0.04470464214682579,
0.10005760192871094,
0.058856748044490814,
0.01943058706820011,
0.13215520977973938,
-0.1164829432964325,
0.08512931317090988,
0.1828341782093048,
-0.24191047251224518,
-0.006823094096034765,
0.061043880879879,
0.08518847078084946,
0.02852868288755417,
-0.0027500083670020103,
0.050437480211257935,
0.06484624743461609,
0.021347152069211006,
0.03636380657553673,
-0.0638979896903038,
0.05009674280881882,
0.011186900548636913,
-0.10215270519256592,
-0.07660897821187973,
0.2294245809316635,
-0.0602995827794075,
0.004886541981250048,
-0.028186868876218796,
-0.08490151911973953,
-0.06026294082403183,
-0.012795685790479183,
-0.05738116800785065,
-0.01700775697827339,
0.034475523978471756,
0.0214629378169775,
-0.10922146588563919,
-0.07046429812908173,
-0.1146778091788292,
-0.09578189998865128,
0.17879855632781982,
-0.02900231070816517,
0.08940442651510239,
-0.2603394091129303,
0.2254239320755005,
0.07019765675067902,
-0.11909467726945877,
0.04757627099752426,
-0.11814092099666595,
0.08530338853597641,
0.02207805961370468,
0.041434839367866516,
0.07761925458908081,
0.04161522537469864,
0.11940909177064896,
0.03257475048303604,
-0.00314142694696784,
0.05456724017858505,
0.07224375009536743,
0.0941813513636589,
0.1230122447013855,
-0.07287998497486115,
-0.07873915135860443,
0.08034131675958633,
-0.04709449037909508,
-0.11787727475166321,
-0.06861002743244171,
-0.1452382206916809,
-0.004202034790068865,
-0.0335184708237648,
0.07922724634408951,
0.05637102574110031,
0.09526676684617996,
-0.01433930266648531,
-0.05777551606297493,
0.04157217592000961,
-0.06985066086053848,
0.019995175302028656,
-0.011708668433129787,
-0.060763102024793625,
0.13121068477630615,
0.04688004031777382,
-0.014275136403739452,
-0.08673419058322906,
0.07951672375202179,
-0.1204570084810257,
-0.07978847622871399,
-0.08239417523145676,
-0.05098734796047211,
-0.007686138618737459,
-0.11281027644872665,
0.049203939735889435,
-0.11458899080753326,
-0.22705501317977905,
-0.01313747651875019,
0.04550096020102501,
-0.016462473198771477,
-0.03707117214798927,
-0.040630146861076355,
-0.009473399259150028,
0.04880702868103981,
-0.042894408106803894,
0.039052072912454605,
-0.05325083062052727,
0.050018906593322754,
-0.09618895500898361,
0.051321301609277725,
-0.10470817238092422,
0.041251372545957565,
-0.09671807289123535,
0.07595963031053543,
0.0017866486450657248,
0.0454767569899559,
0.010063555091619492,
0.08985432982444763,
-0.03214199095964432,
-0.044836416840553284,
-0.07869677990674973,
0.026561295613646507,
0.02284199558198452,
0.20134314894676208,
-0.10196486860513687,
-0.0819794163107872,
0.12614667415618896,
-0.07252102345228195,
-0.1288122832775116,
0.0409054271876812,
-0.02831157296895981,
0.17958028614521027,
0.07370224595069885,
0.16096454858779907,
0.12039237469434738,
-0.03798284754157066,
0.1263856589794159,
0.14821170270442963,
-0.1269366294145584,
-0.004153167363256216,
0.039416827261447906,
0.014042570255696774,
-0.20559121668338776,
0.04052022844552994,
-0.01946125738322735,
0.06584736704826355,
-0.10201486945152283,
-0.00946728140115738,
0.0031527038663625717,
-0.02320384420454502,
0.0023026331327855587,
-0.0472555048763752,
0.061078161001205444,
0.038737643510103226,
-0.022763127461075783,
0.029682613909244537,
0.06635552644729614,
-0.026160234585404396,
-0.00823320634663105,
-0.04484035447239876,
0.10009831935167313,
-0.07260703295469284,
0.06797321885824203,
-0.13160665333271027,
-0.003086181590333581,
-0.012450510635972023,
0.0972909927368164,
0.03811546042561531,
0.10621625185012817,
0.05470104143023491,
0.03126294165849686,
0.07701993733644485,
-0.02846951223909855,
0.0746634304523468,
0.01002051867544651,
-0.0850096121430397,
-0.1272277683019638,
0.015217535197734833,
-0.10649837553501129,
-0.004191380459815264,
-0.08523700386285782,
-0.0015737697249278426,
-0.12802091240882874,
0.05826076865196228,
-0.016465239226818085,
0.0539434514939785,
-0.055182818323373795,
-0.04337453469634056,
-0.044662054628133774,
-0.022007398307323456,
0.0482511967420578,
-0.033139705657958984,
-0.06687536835670471,
0.16832175850868225,
-0.15395553410053253,
0.2592274248600006,
0.1313537061214447,
-0.0889907032251358,
0.002020547864958644,
-0.07812081277370453,
-0.04194976016879082,
-0.012868959456682205,
0.07527336478233337,
-0.03700125962495804,
0.15495356917381287,
-0.03386903926730156,
0.17382068932056427,
-0.099679134786129,
-0.0334031768143177,
0.02144831046462059,
-0.10981054604053497,
0.057656705379486084,
0.08051177859306335,
0.04427298903465271,
-0.159133642911911,
0.08837400376796722,
0.19756358861923218,
0.05472150072455406,
0.20321963727474213,
-0.006869805511087179,
-0.06112205237150192,
-0.05358194187283516,
-0.0808696523308754,
-0.052568696439266205,
0.056259434670209885,
-0.09903126955032349,
-0.004000484477728605,
0.06423190236091614,
0.08783505111932755,
0.11237549781799316,
-0.10904275625944138,
-0.046180129051208496,
0.05125856027007103,
-0.004819708876311779,
-0.051060013473033905,
0.07006146013736725,
-0.0659489631652832,
0.13217470049858093,
0.014598124660551548,
-0.07049204409122467,
0.0036125897895544767,
-0.004913401324301958,
-0.11891388893127441,
0.20653130114078522,
-0.08047540485858917,
-0.27306002378463745,
-0.16792123019695282,
-0.16288253664970398,
0.07165426760911942,
0.038431257009506226,
0.033738572150468826,
-0.08776884526014328,
-0.020982403308153152,
0.004409478977322578,
0.11553267389535904,
-0.09698133170604706,
0.013121976517140865,
0.008159824647009373,
-0.018650712445378304,
-0.07579360157251358,
-0.09033482521772385,
-0.0241270512342453,
-0.02461584471166134,
0.020020704716444016,
0.03998296707868576,
-0.11154978722333908,
0.06758414953947067,
0.2167699933052063,
-0.015538511797785759,
0.06870997697114944,
0.00025148785789497197,
0.26176807284355164,
-0.08426473289728165,
0.040830448269844055,
0.11926601082086563,
-0.08760137856006622,
0.05199241638183594,
0.07132956385612488,
0.03210015222430229,
-0.014074578881263733,
0.016441889107227325,
-0.11233895272016525,
-0.12864868342876434,
-0.1923626959323883,
-0.06961654871702194,
-0.028241310268640518,
0.13464264571666718,
0.031150488182902336,
0.04321796074509621,
0.10346641391515732,
0.07471037656068802,
0.06701335310935974,
0.03259968012571335,
-0.0005120337591506541,
0.0647427961230278,
0.024594781920313835,
-0.05812343955039978,
0.054217349737882614,
-0.04845457896590233,
-0.0797470211982727,
0.08279551565647125,
-0.011098933406174183,
0.0927528515458107,
0.06928195804357529,
0.02340286411345005,
0.018686039373278618,
0.04218229651451111,
0.15593960881233215,
0.22442668676376343,
-0.012412761338055134,
-0.041085485368967056,
-0.05078154057264328,
-0.040494389832019806,
-0.01600850187242031,
0.015044075436890125,
-0.05785144492983818,
-0.033252447843551636,
-0.0728597640991211,
-0.015066487714648247,
0.011195010505616665,
0.015441779047250748,
0.07578693330287933,
-0.22024130821228027,
-0.038240667432546616,
0.042616840451955795,
-0.013794191181659698,
-0.10639895498752594,
0.05872863903641701,
0.016779562458395958,
-0.17391349375247955,
-0.07854076474905014,
-0.016605399549007416,
0.1603294163942337,
-0.030760308727622032,
0.0619782954454422,
0.005449770484119654,
0.02271227352321148,
-0.013140208087861538,
0.11191333085298538,
-0.27346712350845337,
0.1954270750284195,
0.001131516881287098,
-0.04876048117876053,
-0.016439033672213554,
-0.04243995249271393,
0.0009058643481694162,
0.14556926488876343,
0.09718295931816101,
0.0028763783629983664,
0.0669604167342186,
-0.07678256928920746,
-0.11943262070417404,
0.05284353718161583,
0.08068333566188812,
-0.07738065719604492,
0.029960619285702705,
-0.029798466712236404,
0.027152907103300095,
-0.007555682212114334,
-0.030231619253754616,
0.002119861776009202,
-0.11661309748888016,
0.02936525270342827,
-0.08075195550918579,
0.06012337654829025,
0.02433968149125576,
-0.02529163844883442,
-0.012048180215060711,
0.1316436529159546,
-0.013300766237080097,
-0.08264251798391342,
-0.08976204693317413,
-0.02328740619122982,
0.09523095935583115,
-0.05599937587976456,
0.03358715400099754,
-0.08175740391016006,
-0.04073614999651909,
0.005860272329300642,
-0.16970814764499664,
0.06983034312725067,
-0.10846570879220963,
-0.09971687942743301,
-0.050264790654182434,
0.15346404910087585,
0.013677009381353855,
0.025709833949804306,
0.03220117464661598,
-0.04211581498384476,
-0.18150363862514496,
-0.15989434719085693,
-0.007562890648841858,
0.0717545747756958,
-0.04433317109942436,
0.03638565540313721,
0.007171243894845247,
0.10013602674007416,
0.004198792390525341,
0.07230839878320694,
0.2026015669107437,
0.16423118114471436,
-0.08760133385658264,
0.17723721265792847,
0.16266676783561707,
-0.12243213504552841,
-0.2722402811050415,
-0.09522651135921478,
-0.05925937369465828,
0.03468820080161095,
0.02297091670334339,
-0.13072867691516876,
0.06184706464409828,
-0.011241482570767403,
-0.004976592492312193,
0.13391432166099548,
-0.2790721356868744,
-0.07025358080863953,
0.13864430785179138,
-0.012145180255174637,
0.2560276985168457,
-0.042459286749362946,
-0.08155408501625061,
-0.060940731316804886,
-0.2339130938053131,
0.1595010906457901,
-0.12908293306827545,
0.030256805941462517,
-0.06380902975797653,
0.1317017376422882,
0.04475972056388855,
-0.051817599684000015,
0.13714583218097687,
-0.0770399421453476,
0.03692200407385826,
-0.1231972947716713,
-0.01437266543507576,
0.05212629213929176,
-0.014681367203593254,
0.10554680228233337,
-0.053141020238399506,
0.10400939732789993,
-0.12106935679912567,
-0.052672889083623886,
-0.054288461804389954,
0.017598338425159454,
-0.023758167400956154,
-0.05668776109814644,
-0.039483629167079926,
-0.05230721831321716,
0.00942184031009674,
-0.024894973263144493,
-0.008981208316981792,
-0.02189256064593792,
0.08200293034315109,
0.10853444039821625,
0.1416669338941574,
-0.04508063197135925,
-0.02666328102350235,
-0.029412275180220604,
-0.043095141649246216,
0.07755832374095917,
-0.1675589680671692,
-0.020979177206754684,
0.15767353773117065,
0.008264025673270226,
0.08081416040658951,
0.07994852215051651,
-0.043529048562049866,
-0.04116993397474289,
0.09435915946960449,
-0.23738352954387665,
-0.032961416989564896,
-0.07289689034223557,
-0.032304681837558746,
0.05143286660313606,
0.06389017403125763,
0.11233682930469513,
-0.055076416581869125,
-0.015500548295676708,
0.038369257003068924,
-0.013473432511091232,
-0.10457789897918701,
0.12659704685211182,
0.07594829052686691,
0.04931824654340744,
-0.13000807166099548,
0.03979043290019035,
-0.02080575004220009,
-0.024042857810854912,
-0.009190280921757221,
0.09610513597726822,
-0.13868926465511322,
-0.061987441033124924,
0.01100219041109085,
0.1624082624912262,
-0.08940329402685165,
-0.054934311658144,
-0.00678250240162015,
-0.07782098650932312,
0.06215988099575043,
0.06269455701112747,
0.039047662168741226,
0.10006190836429596,
-0.08492296934127808,
-0.004345493856817484,
-0.04427671059966087,
0.02742549031972885,
0.04004936292767525,
-0.01839151792228222,
-0.11644710600376129,
0.050648268312215805,
0.01261399406939745,
0.21786263585090637,
-0.12195943295955658,
-0.07748695462942123,
-0.13975368440151215,
0.03579137846827507,
-0.1441981941461563,
-0.02782432734966278,
-0.09455464035272598,
-0.0542730838060379,
-0.024786408990621567,
-0.02354593575000763,
-0.05044161155819893,
-0.03595460206270218,
-0.06568260490894318,
0.04963921010494232,
-0.01889806240797043,
-0.04201965406537056,
-0.018809955567121506,
0.04780932888388634,
0.10624072700738907,
-0.0022816911805421114,
0.11582330614328384,
0.10476028919219971,
-0.06149300932884216,
0.06964143365621567,
-0.08975338935852051,
0.049342647194862366,
0.010800108313560486,
-0.03639211133122444,
0.07890737056732178,
0.033158838748931885,
0.011678727343678474,
-0.02014644630253315,
-0.05248590186238289,
0.015699470415711403,
0.019494805485010147,
-0.09001129865646362,
0.04338252544403076,
0.03427375108003616,
-0.07128193974494934,
-0.06945458799600601,
-0.02831537090241909,
-0.04915383830666542,
0.10966338962316513,
0.09227382391691208,
0.01580313965678215,
0.11524862796068192,
-0.09982031583786011,
-0.0043287696316838264,
0.0288130734115839,
-0.08074736595153809,
-0.01706261746585369,
-0.10044533759355545,
-0.01304725930094719,
-0.02274717018008232,
0.2554529011249542,
0.12089171260595322,
-0.025309232994914055,
-0.03230812028050423,
0.07114472985267639,
0.08105676621198654,
-0.0211230106651783,
0.14824873208999634,
0.03444083034992218,
-0.0007331980159506202,
-0.1400776505470276,
0.10673409700393677,
-0.060156434774398804,
-0.010151425376534462,
0.09550673514604568,
-0.08319920301437378,
0.048856139183044434,
0.07468824833631516,
-0.01950058713555336,
0.05372466519474983,
-0.11716536432504654,
-0.2690386474132538,
0.023945249617099762,
0.027653370052576065,
-0.0441947840154171,
0.07253700494766235,
0.145015150308609,
0.00042942073196172714,
0.05244648456573486,
0.061493102461099625,
-0.05709811672568321,
-0.17804701626300812,
-0.19115881621837616,
-0.0384756401181221,
-0.11082857847213745,
-0.023826930671930313,
-0.10639674216508865,
0.04148538038134575,
-0.02072913944721222,
0.05925795063376427,
-0.09639845043420792,
0.12447383254766464,
0.06843417137861252,
-0.11577396094799042,
0.05810433253645897,
-0.008805959485471249,
0.048459235578775406,
-0.07387776672840118,
0.08210063725709915,
-0.10721065104007721,
-0.026499031111598015,
-0.016933415085077286,
0.03711435943841934,
-0.05858420953154564,
0.0011270071845501661,
-0.10357651859521866,
-0.06808403134346008,
-0.056935109198093414,
0.09072309732437134,
-0.024477418512105942,
0.03998230770230293,
-0.014557241462171078,
-0.061277762055397034,
-0.025446701794862747,
0.2273169606924057,
-0.018587565049529076,
-0.043939489871263504,
-0.0661960318684578,
0.2851298749446869,
-0.06544138491153717,
0.07253559678792953,
-0.032977886497974396,
-0.001274158013984561,
-0.07127615064382553,
0.2931469976902008,
0.36314713954925537,
-0.14264726638793945,
0.011796033009886742,
-0.018389053642749786,
0.03556118905544281,
0.07535336911678314,
0.18024654686450958,
0.07291083037853241,
0.3107033371925354,
-0.04080776497721672,
-0.01225926261395216,
-0.10546047985553741,
-0.03835856914520264,
0.014304363168776035,
0.02947218343615532,
0.08378855139017105,
-0.05586446449160576,
-0.06808875501155853,
0.1039084792137146,
-0.26703301072120667,
-0.02056516334414482,
-0.16380304098129272,
-0.061613935977220535,
-0.04166705906391144,
0.0007227687747217715,
0.07237391918897629,
0.028740311041474342,
0.05115301162004471,
-0.039005450904369354,
-0.047156207263469696,
0.057444483041763306,
-0.02154913917183876,
-0.12674635648727417,
0.0002557095722295344,
0.143532857298851,
-0.07906237244606018,
-0.0018181405030190945,
0.0032308290246874094,
0.060348983854055405,
0.044118594378232956,
0.04119637981057167,
-0.10164451599121094,
0.02608482725918293,
0.01246592216193676,
-0.03363148868083954,
-0.028164468705654144,
0.008156497962772846,
0.07835527509450912,
-0.21697945892810822,
0.0020338469184935093,
-0.14078554511070251,
0.011757226660847664,
-0.07641053944826126,
-0.006896127946674824,
-0.08222074061632156,
0.03242125362157822,
0.004625517874956131,
0.1118803396821022,
0.11125602573156357,
-0.03202005848288536,
-0.0006144302315078676,
-0.06265610456466675,
0.06727221608161926,
-0.06884542852640152,
-0.02960195019841194,
-0.025150567293167114,
-0.09257599711418152,
-0.09335606545209885,
0.09815482050180435,
-0.022339481860399246,
-0.1427105814218521,
0.007601875811815262,
-0.09401176869869232,
-0.04369132220745087,
-0.021486658602952957,
0.09382037818431854,
0.11086808145046234,
0.09180203825235367,
-0.007599277421832085,
0.047748953104019165,
0.03120456263422966,
0.07436691224575043,
-0.12886843085289001,
-0.10148585587739944
] |