| Column | Type |
| --- | --- |
| sha | null |
| last_modified | null |
| library_name | stringclasses (154 values) |
| text | stringlengths (1–900k) |
| metadata | stringlengths (2–348k) |
| pipeline_tag | stringclasses (45 values) |
| id | stringlengths (5–122) |
| tags | listlengths (1–1.84k) |
| created_at | stringlengths (25–25) |
| arxiv | listlengths (0–201) |
| languages | listlengths (0–1.83k) |
| tags_str | stringlengths (17–9.34k) |
| text_str | stringlengths (0–389k) |
| text_lists | listlengths (0–722) |
| processed_texts | listlengths (1–723) |
| tokens_length | listlengths (1–723) |
| input_texts | listlengths (1–61) |
| embeddings | listlengths (768–768) |
null | null | transformers |
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333105810851975169/duOCN2P4_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">David desJardins 🤖 AI Bot </div>
<div style="font-size: 15px">@david_desj bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@david_desj's tweets](https://twitter.com/david_desj).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 41 |
| Short tweets | 23 |
| Tweets kept | 3186 |
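Retweets and short tweets are filtered out before training, so the counts are consistent: 3250 − 41 − 23 = 3186 tweets kept.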
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/391ogow1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @david_desj's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qeggjsp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qeggjsp/artifacts) is logged and versioned.
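For intuition, here is a minimal sketch of this style of fine-tuning with the `transformers` Trainer; the file path and all hyperparameters are illustrative placeholders, not the values recorded in the W&B run:

```python
# A minimal sketch of fine-tuning GPT-2 on a file of tweets (one per line).
# "tweets.txt" and all hyperparameters are illustrative, not the values
# recorded in the W&B run for this model.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# TextDataset chunks the file into fixed-size blocks, so no padding is needed.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="tweets.txt",
                            block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="output", num_train_epochs=4,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()
```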
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/david_desj')
generator("My dream is", num_return_sequences=5)
```
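If you want finer control over generation than the pipeline exposes, you can also load the tokenizer and model directly; the generation settings below are illustrative choices, not tuned values:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/david_desj')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/david_desj')

inputs = tokenizer("My dream is", return_tensors="pt")
# Sample a continuation; max_length and top_p are illustrative choices.
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.95,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```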
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/david_desj/1616642206560/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/david_desj
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/855423041161105408/f8QTAXnm_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Daviz 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@david_rccv bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@david_rccv's tweets](https://twitter.com/david_rccv).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3117 |
| Retweets | 1335 |
| Short tweets | 187 |
| Tweets kept | 1595 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2u4p05g9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @david_rccv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2njpujl0) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2njpujl0/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/david_rccv')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/david_rccv/1602336344954/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/david_rccv
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/745895338003828736/rrplzLVB_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">David Gasquez 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@davidgasquez bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@davidgasquez's tweets](https://twitter.com/davidgasquez).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3065 |
| Retweets | 674 |
| Short tweets | 66 |
| Tweets kept | 2325 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/26tasjrn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidgasquez's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3ez0xoyl) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3ez0xoyl/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/davidgasquez')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/davidgasquez/1600679713505/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/davidgasquez
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/792165528752140288/liCCmoI2_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">David Goggins 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@davidgoggins bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@davidgoggins's tweets](https://twitter.com/davidgoggins).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 557 |
| Retweets | 10 |
| Short tweets | 75 |
| Tweets kept | 472 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3bgqr5vh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidgoggins's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/13i4mcyp) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/13i4mcyp/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/davidgoggins')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/davidgoggins/1603830361250/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/davidgoggins
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
"## About\n\n*Built by Boris Dayma*\n\n</section>\n\n">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mighty</div>
<div style="text-align: center; font-size: 14px;">@davidlisowsky</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mighty.
| Data | Mighty |
| --- | --- |
| Tweets downloaded | 156 |
| Retweets | 53 |
| Short tweets | 15 |
| Tweets kept | 88 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hyhz2eh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidlisowsky's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2v0yazpf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2v0yazpf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/davidlisowsky')
generator("My dream is", num_return_sequences=5)
```
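The pipeline forwards standard generation keyword arguments to `model.generate`, so you can tune the sampling directly; the values here are illustrative:

```python
generator("My dream is",
          num_return_sequences=3,
          max_length=60,       # cap on total tokens, prompt included
          do_sample=True,      # sample instead of greedy decoding
          temperature=0.9,     # <1.0 sharpens, >1.0 flattens the distribution
          top_k=50)            # restrict sampling to the 50 most likely tokens
```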
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/davidlisowsky/1631500152718/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/davidlisowsky
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">David R. Liu</div>
<div style="text-align: center; font-size: 14px;">@davidrliu</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from David R. Liu.
| Data | David R. Liu |
| --- | --- |
| Tweets downloaded | 2124 |
| Retweets | 952 |
| Short tweets | 62 |
| Tweets kept | 1110 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29r3m2zm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidrliu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27i98foi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27i98foi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/davidrliu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/davidrliu/1622126441318/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/davidrliu
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">David Vizgan</div>
<div style="text-align: center; font-size: 14px;">@davidvizgan</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from David Vizgan.
| Data | David Vizgan |
| --- | --- |
| Tweets downloaded | 2075 |
| Retweets | 563 |
| Short tweets | 302 |
| Tweets kept | 1210 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36psf0c4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidvizgan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23t5t1ij) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23t5t1ij/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/davidvizgan')
generator("My dream is", num_return_sequences=5)
```
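The pipeline also accepts a list of prompts, which is convenient for generating from several seeds at once; the prompts here are illustrative:

```python
prompts = ["My dream is", "Today I learned", "Honestly,"]
for result in generator(prompts, num_return_sequences=1):
    # Each entry is a list with one dict per returned sequence.
    print(result[0]["generated_text"])
```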
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/davidvizgan/1623099475956/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/davidvizgan
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">dawniee🌸☁️🌈 🤖 AI Bot </div>
<div style="font-size: 15px">@dawnieedreams bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dawnieedreams's tweets](https://twitter.com/dawnieedreams).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 488 |
| Short tweets | 441 |
| Tweets kept | 2296 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bnevhdny/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dawnieedreams's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vytig6y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vytig6y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/dawnieedreams')
generator("My dream is", num_return_sequences=5)
```
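Sampling is stochastic, so repeated calls give different continuations; if you need repeatable outputs, fix the seed before generating (the seed value is arbitrary):

```python
from transformers import pipeline, set_seed

set_seed(42)  # makes the sampled continuations repeatable
generator = pipeline('text-generation', model='huggingtweets/dawnieedreams')
generator("My dream is", num_return_sequences=5)
```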
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dawnieedreams
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Devlet Bahçeli</div>
<div style="text-align: center; font-size: 14px;">@dbdevletbahceli</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Devlet Bahçeli.
| Data | Devlet Bahçeli |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 0 |
| Short tweets | 19 |
| Tweets kept | 3181 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ni0ttu3d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dbdevletbahceli's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ois198tw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ois198tw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/dbdevletbahceli')
generator("My dream is", num_return_sequences=5)
```
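To avoid re-downloading the weights on every run, you can cache them to a local directory once and load from that path afterwards; the directory name is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/dbdevletbahceli')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/dbdevletbahceli')

# Save both pieces side by side so they can be loaded with one path.
tokenizer.save_pretrained('./dbdevletbahceli-local')
model.save_pretrained('./dbdevletbahceli-local')

# Later: pipeline('text-generation', model='./dbdevletbahceli-local')
```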
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dbdevletbahceli/1625817202615/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dbdevletbahceli
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dale Dorsey 🤖 AI Bot </div>
<div style="font-size: 15px">@dd0031 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dd0031's tweets](https://twitter.com/dd0031).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 177 |
| Retweets | 65 |
| Short tweets | 18 |
| Tweets kept | 94 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3hyz1y2u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dd0031's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/82633lpl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/82633lpl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/dd0031')
generator("My dream is", num_return_sequences=5)
```
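Generation runs on CPU by default; passing `device` moves the pipeline to a GPU when one is available (the convention is 0 for the first GPU, -1 for CPU):

```python
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1
generator = pipeline('text-generation', model='huggingtweets/dd0031',
                     device=device)
generator("My dream is", num_return_sequences=5)
```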
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dd0031/1616729814364/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dd0031
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1166296863068360704/9Rbf-i7O_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ddlc quote bot 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ddlcquotes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ddlcquotes's tweets](https://twitter.com/ddlcquotes).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 0 |
| Short tweets | 27 |
| Tweets kept | 3176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vugceit/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ddlcquotes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1rh6mzov) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1rh6mzov/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ddlcquotes')
generator("My dream is", num_return_sequences=5)
```
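GPT-2 has no padding token, so `generate` emits a warning about `pad_token_id` during open-ended sampling; you can silence it by reusing the end-of-sequence id explicitly:

```python
generator("My dream is",
          num_return_sequences=5,
          pad_token_id=generator.tokenizer.eos_token_id)  # reuse EOS as pad
```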
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ddlcquotes/1612815814568/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ddlcquotes
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ddlc quote bot AI Bot </div>
<div style="font-size: 15px; color: #657786">@ddlcquotes bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @ddlcquotes's tweets.
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 0 |
| Short tweets | 27 |
| Tweets kept | 3176 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @ddlcquotes's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ddlcquotes')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">persona non greta</div>
<div style="text-align: center; font-size: 14px;">@dead__bug</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from persona non greta.
| Data | persona non greta |
| --- | --- |
| Tweets downloaded | 3095 |
| Retweets | 449 |
| Short tweets | 623 |
| Tweets kept | 2023 |
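The counts above imply a simple filtering rule: retweets and very short tweets are dropped before training. The huggingtweets preprocessing code isn't reproduced in this card, so the sketch below is a hypothetical illustration of that kind of filter — the field names and the 20-character threshold are assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of the filtering summarized above: of 3095 downloaded
# tweets, 449 retweets and 623 short tweets are dropped, keeping 2023.
def filter_tweets(tweets, min_chars=20):
    kept = []
    for tweet in tweets:
        if tweet.get("is_retweet"):         # tallied as "Retweets"
            continue
        if len(tweet["text"]) < min_chars:  # tallied as "Short tweets"
            continue
        kept.append(tweet)                  # tallied as "Tweets kept"
    return kept
```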
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/oyzjw1jc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dead__bug's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20dghuyx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20dghuyx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dead__bug')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dead__bug/1627231954071/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dead__bug
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
persona non greta
@dead\_\_bug
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from persona non greta.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dead\_\_bug's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">DEA HQ 🤖 AI Bot </div>
<div style="font-size: 15px">@deahq bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deahq's tweets](https://twitter.com/deahq).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 1568 |
| Short tweets | 25 |
| Tweets kept | 1633 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1th11ksq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deahq's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vxabqh4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vxabqh4/artifacts) is logged and versioned.
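Because the final model is versioned as a W&B artifact, it can also be fetched back programmatically. Below is a sketch using the W&B public API; the artifact name is a guess derived from the run id, so check the linked run page for the real reference.

```python
import wandb

# Hypothetical artifact reference; the actual name is shown on the W&B run page.
api = wandb.Api()
artifact = api.artifact("wandb/huggingtweets/model-1vxabqh4:latest")
model_dir = artifact.download()  # downloads the versioned model files locally
print(model_dir)
```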
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deahq')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deahq
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
DEA HQ AI Bot
@deahq bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deahq's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deahq's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1172237959032201219/8tZPfA9n_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">💟 Dealing Porn 💟 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@dealingporn bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dealingporn's tweets](https://twitter.com/dealingporn).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 0 |
| Short tweets | 2141 |
| Tweets kept | 1072 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/10cc3bw9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dealingporn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2d2vjn6v) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2d2vjn6v/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/dealingporn')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dealingporn
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800"> Dealing Porn AI Bot </div>
<div style="font-size: 15px; color: #657786">@dealingporn bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @dealingporn's tweets.
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 0 |
| Short tweets | 2141 |
| Tweets kept | 1072 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @dealingporn's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/dealingporn')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Death Battle Bot</div>
<div style="text-align: center; font-size: 14px;">@deathbattlebot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Death Battle Bot.
| Data | Death Battle Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 1 |
| Short tweets | 20 |
| Tweets kept | 3229 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hcf8oqg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deathbattlebot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2d0vuhj5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2d0vuhj5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deathbattlebot')
generator("My dream is", num_return_sequences=5)
```
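Sampling is stochastic, so each call returns different tweets. If you need reproducible output, Transformers ships a `set_seed` helper that pins the relevant RNGs first; this is a usage sketch, not part of the original card:

```python
from transformers import pipeline, set_seed

set_seed(42)  # fixes Python, NumPy and PyTorch seeds for repeatable sampling
generator = pipeline('text-generation',
                     model='huggingtweets/deathbattlebot')
generator("My dream is", num_return_sequences=5)
```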
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deathbattlebot/1626676974616/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deathbattlebot
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Death Battle Bot
@deathbattlebot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Death Battle Bot.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deathbattlebot's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">numen 🤖 AI Bot </div>
<div style="font-size: 15px">@decadantism bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@decadantism's tweets](https://twitter.com/decadantism).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1922 |
| Retweets | 50 |
| Short tweets | 142 |
| Tweets kept | 1730 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28bmv6dp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @decadantism's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1657a5q3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1657a5q3/artifacts) is logged and versioned.
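The card links the run but not the training script. A minimal sketch of this kind of GPT-2 fine-tune with the Trainer API, streaming metrics to W&B, might look as follows — every hyperparameter value below is a placeholder, not one recorded in the linked run:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

args = TrainingArguments(
    output_dir="gpt2-decadantism",
    num_train_epochs=4,             # placeholder value
    per_device_train_batch_size=8,  # placeholder value
    report_to="wandb",              # log hyperparameters and metrics to W&B
)
# tweet_dataset would be the tokenized "Tweets kept" corpus:
# trainer = Trainer(model=model, args=args, train_dataset=tweet_dataset)
# trainer.train()
```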
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/decadantism')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/decadantism/1617759987918/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/decadantism
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
numen AI Bot
@decadantism bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @decadantism's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @decadantism's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1219604909382754304/dP1klRbB_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">decodem.ai 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@decodemai bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@decodemai's tweets](https://twitter.com/decodemai).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 97 |
| Retweets | 3 |
| Short tweets | 17 |
| Tweets kept | 77 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jjvvorob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @decodemai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ilv1sdu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ilv1sdu/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/decodemai')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/decodemai/1609942356404/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/decodemai
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">URL AI Bot </div>
<div style="font-size: 15px; color: #657786">@decodemai bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @decodemai's tweets.
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 97 |
| Retweets | 3 |
| Short tweets | 17 |
| Tweets kept | 77 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @decodemai's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/decodemai')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">rei ayanami appreciator 🤖 AI Bot </div>
<div style="font-size: 15px">@decoratedboar bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@decoratedboar's tweets](https://twitter.com/decoratedboar).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 1039 |
| Short tweets | 408 |
| Tweets kept | 1779 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jw8lo8p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @decoratedboar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2opldfex) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2opldfex/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/decoratedboar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/decoratedboar/1617764745447/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/decoratedboar
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
rei ayanami appreciator AI Bot
@decoratedboar bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @decoratedboar's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @decoratedboar's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sadie 🦝</div>
<div style="text-align: center; font-size: 14px;">@deddogoon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sadie 🦝.
| Data | sadie 🦝 |
| --- | --- |
| Tweets downloaded | 2566 |
| Retweets | 461 |
| Short tweets | 788 |
| Tweets kept | 1317 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xu99dv1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deddogoon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i09ou5n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i09ou5n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deddogoon')
generator("My dream is", num_return_sequences=5)
```
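If you need more control than the pipeline helper offers (custom decoding, batching), the same checkpoint can be loaded directly. A sketch with typical sampling settings — the decoding parameters are illustrative, not values the card prescribes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/deddogoon")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/deddogoon")

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.95,
                         max_length=60, num_return_sequences=5,
                         pad_token_id=tokenizer.eos_token_id)  # GPT-2 has no pad token
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```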
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/deddogoon/1651202174338/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deddogoon
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
sadie
@deddogoon
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from sadie.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deddogoon's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">👁️ Deep Thrill 😎 🤖 AI Bot </div>
<div style="font-size: 15px">@deeperthrill bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deeperthrill's tweets](https://twitter.com/deeperthrill).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 2415 |
| Short tweets | 165 |
| Tweets kept | 655 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3t139cp8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deeperthrill's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9rc0g39n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9rc0g39n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deeperthrill')
generator("My dream is", num_return_sequences=5)
```
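On a machine with a GPU, the same pipeline can be placed on the accelerator; the `device` argument is standard Transformers usage rather than anything this card specifies:

```python
from transformers import pipeline

# device=0 selects the first CUDA GPU; the default (device=-1) runs on CPU.
generator = pipeline('text-generation',
                     model='huggingtweets/deeperthrill',
                     device=0)
generator("My dream is", num_return_sequences=5)
```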
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deeperthrill/1616001334930/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deeperthrill
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
️ Deep Thrill AI Bot
@deeperthrill bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deeperthrill's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deeperthrill's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">invisible college dropout 🤖 AI Bot </div>
<div style="font-size: 15px">@deepfates bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deepfates's tweets](https://twitter.com/deepfates).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 230 |
| Short tweets | 647 |
| Tweets kept | 2369 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jkblzyg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepfates's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lia0i7h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lia0i7h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepfates')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepfates/1617047160223/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepfates
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
invisible college dropout AI Bot
@deepfates bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deepfates's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepfates's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1383905819217911808/AIWNRt5y_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421970043269693450/kDxxMQub_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & dodo82.jp & TSM FTX Leffen</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-dodo82j-tsm_leffen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot & dodo82.jp & TSM FTX Leffen.
| Data | Deep Leffen Bot | dodo82.jp | TSM FTX Leffen |
| --- | --- | --- | --- |
| Tweets downloaded | 505 | 220 | 3249 |
| Retweets | 13 | 32 | 368 |
| Short tweets | 26 | 26 | 142 |
| Tweets kept | 466 | 162 | 2739 |
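The "Tweets kept" row is simply the downloads minus the filtered retweets and short tweets; a quick check of the table's arithmetic:

```python
stats = {
    "Deep Leffen Bot": {"downloaded": 505,  "retweets": 13,  "short": 26},
    "dodo82.jp":       {"downloaded": 220,  "retweets": 32,  "short": 26},
    "TSM FTX Leffen":  {"downloaded": 3249, "retweets": 368, "short": 142},
}
for user, s in stats.items():
    kept = s["downloaded"] - s["retweets"] - s["short"]
    print(f"{user}: {kept} tweets kept")  # 466, 162 and 2739, matching the table
```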
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/102ri7zl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-dodo82j-tsm_leffen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2m0x83ro) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2m0x83ro/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-dodo82j-tsm_leffen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepleffen-dodo82j-tsm_leffen/1628743643357/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen-dodo82j-tsm_leffen
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Deep Leffen Bot & URL & TSM FTX Leffen
@deepleffen-dodo82j-tsm\_leffen
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deep Leffen Bot & URL & TSM FTX Leffen.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen-dodo82j-tsm\_leffen's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1383905819217911808/AIWNRt5y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & dodo82.jp</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-dodo82j</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot & dodo82.jp.
| Data | Deep Leffen Bot | dodo82.jp |
| --- | --- | --- |
| Tweets downloaded | 505 | 220 |
| Retweets | 13 | 32 |
| Short tweets | 26 | 26 |
| Tweets kept | 466 | 162 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/q5s6fe5u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-dodo82j's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gbchfdc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gbchfdc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-dodo82j')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepleffen-dodo82j/1628739655504/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen-dodo82j
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Deep Leffen Bot & URL
@deepleffen-dodo82j
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deep Leffen Bot & URL.
Data: Tweets downloaded, Deep Leffen Bot: 505, URL: 220
Data: Retweets, Deep Leffen Bot: 13, URL: 32
Data: Short tweets, Deep Leffen Bot: 26, URL: 26
Data: Tweets kept, Deep Leffen Bot: 466, URL: 162
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen-dodo82j's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
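The card's own example uses the pipeline helper; as a sketch only, the same checkpoint can also be driven through the lower-level generate API, with the sampling parameters below chosen for illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/deepleffen-dodo82j')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/deepleffen-dodo82j')

inputs = tokenizer("My dream is", return_tensors='pt')
# Sample instead of greedy decoding, since tweet generation benefits from variety;
# GPT-2 has no pad token, so reuse EOS to silence the padding warning
outputs = model.generate(**inputs, do_sample=True, top_p=0.95, max_length=60,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```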
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425624685190946817/QM0oy_7p_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Deep Leffen Bot & LUMBAGO (1984)</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-dril-twomad</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Deep Leffen Bot & LUMBAGO (1984).
| Data | wint | Deep Leffen Bot | LUMBAGO (1984) |
| --- | --- | --- | --- |
| Tweets downloaded | 3203 | 505 | 3249 |
| Retweets | 459 | 13 | 61 |
| Short tweets | 311 | 26 | 1693 |
| Tweets kept | 2433 | 466 | 1495 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kvrfp57/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-dril-twomad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6koeu45s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6koeu45s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-dril-twomad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepleffen-dril-twomad/1628758133714/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen-dril-twomad
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
wint & Deep Leffen Bot & LUMBAGO (1984)
@deepleffen-dril-twomad
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from wint & Deep Leffen Bot & LUMBAGO (1984).
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen-dril-twomad's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
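Generation is stochastic, so repeated runs differ; a small sketch, assuming transformers' set_seed helper, pins the random state for reproducible demos:

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation',
                     model='huggingtweets/deepleffen-dril-twomad')

set_seed(42)  # fix the RNG so repeated runs return the same samples
generator("My dream is", num_return_sequences=5)
```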
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & wint</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot & wint.
| Data | Deep Leffen Bot | wint |
| --- | --- | --- |
| Tweets downloaded | 506 | 3209 |
| Retweets | 13 | 463 |
| Short tweets | 26 | 311 |
| Tweets kept | 467 | 2435 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29zfoi4y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fygim56) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fygim56/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepleffen-dril/1628834161509/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen-dril
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Deep Leffen Bot & wint
@deepleffen-dril
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deep Leffen Bot & wint.
Data: Tweets downloaded, Deep Leffen Bot: 506, wint: 3209
Data: Retweets, Deep Leffen Bot: 13, wint: 463
Data: Short tweets, Deep Leffen Bot: 26, wint: 311
Data: Tweets kept, Deep Leffen Bot: 467, wint: 2435
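The kept counts above reflect the preprocessing described elsewhere in the card: retweets are dropped and very short tweets are filtered out. A sketch of that filter, where the retweet marker and length threshold are assumptions rather than the pipeline's exact rules:

```python
def filter_tweets(tweets):
    """Drop retweets and very short tweets, mirroring the kept/dropped counts."""
    kept = []
    for text in tweets:
        if text.startswith('RT @'):   # assumed retweet marker
            continue
        if len(text.split()) < 3:     # assumed 'short tweet' threshold
            continue
        kept.append(text)
    return kept
```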
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen-dril's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
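The pipeline also accepts a batch of prompts and returns one list of candidates per prompt; a sketch, with both prompt strings purely illustrative:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/deepleffen-dril')

prompts = ["My dream is", "The secret to Melee is"]
# One inner list of candidates comes back per prompt
for candidates in generator(prompts, num_return_sequences=2, max_length=40):
    for candidate in candidates:
        print(candidate['generated_text'])
```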
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1386749605216407555/QIJeyWfE_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425624685190946817/QM0oy_7p_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & wint but Al & LUMBAGO (1984)</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-dril_gpt2-twomad</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot & wint but Al & LUMBAGO (1984).
| Data | Deep Leffen Bot | wint but Al | LUMBAGO (1984) |
| --- | --- | --- | --- |
| Tweets downloaded | 505 | 3248 | 3249 |
| Retweets | 13 | 41 | 61 |
| Short tweets | 26 | 49 | 1691 |
| Tweets kept | 466 | 3158 | 1497 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/363d721t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-dril_gpt2-twomad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gtnp6sz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gtnp6sz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-dril_gpt2-twomad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepleffen-dril_gpt2-twomad/1628757298537/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen-dril_gpt2-twomad
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Deep Leffen Bot & wint but Al & LUMBAGO (1984)
@deepleffen-dril\_gpt2-twomad
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deep Leffen Bot & wint but Al & LUMBAGO (1984).
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen-dril\_gpt2-twomad's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
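On a machine with a CUDA GPU, inference can be moved off the CPU; a sketch, assuming device 0 is available:

```python
from transformers import pipeline

# device=0 places the model on the first CUDA GPU; device=-1 (the default) keeps it on CPU
generator = pipeline('text-generation',
                     model='huggingtweets/deepleffen-dril_gpt2-twomad',
                     device=0)
generator("My dream is", num_return_sequences=5)
```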
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1419130410659889153/F2F8J5kC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & ابن ڪربلاء 🇮🇶🇵🇸</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-ibnalrafidayn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot & ابن ڪربلاء 🇮🇶🇵🇸.
| Data | Deep Leffen Bot | ابن ڪربلاء 🇮🇶🇵🇸 |
| --- | --- | --- |
| Tweets downloaded | 497 | 3157 |
| Retweets | 13 | 1624 |
| Short tweets | 26 | 149 |
| Tweets kept | 458 | 1384 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/nxq13p47/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-ibnalrafidayn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/40mnb9ye) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/40mnb9ye/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-ibnalrafidayn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepleffen-ibnalrafidayn/1627628633670/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen-ibnalrafidayn
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Deep Leffen Bot & ابن ڪربلاء 🇮🇶🇵🇸
@deepleffen-ibnalrafidayn
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deep Leffen Bot & ابن ڪربلاء 🇮🇶🇵🇸.
Data: Tweets downloaded, Deep Leffen Bot: 497, ابن ڪربلاء 🇮🇶🇵🇸: 3157
Data: Retweets, Deep Leffen Bot: 13, ابن ڪربلاء 🇮🇶🇵🇸: 1624
Data: Short tweets, Deep Leffen Bot: 26, ابن ڪربلاء 🇮🇶🇵🇸: 149
Data: Tweets kept, Deep Leffen Bot: 458, ابن ڪربلاء 🇮🇶🇵🇸: 1384
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen-ibnalrafidayn's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
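Generated continuations are not limited to tweet length; a sketch of clipping each output to Twitter's 280-character cap as a post-processing step:

```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/deepleffen-ibnalrafidayn')

# Each result is a dict with a 'generated_text' key; clip it to tweet length
for out in generator("My dream is", num_return_sequences=5):
    print(out['generated_text'][:280])
```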
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425624685190946817/QM0oy_7p_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1104281298967904257/KuDWZQfF_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & LUMBAGO (1984) & Schlatt</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-jschlatt-twomad</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot & LUMBAGO (1984) & Schlatt.
| Data | Deep Leffen Bot | LUMBAGO (1984) | Schlatt |
| --- | --- | --- | --- |
| Tweets downloaded | 505 | 3249 | 3250 |
| Retweets | 13 | 62 | 3 |
| Short tweets | 26 | 1691 | 1236 |
| Tweets kept | 466 | 1496 | 2011 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tchb83i1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-jschlatt-twomad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35gw3gup) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35gw3gup/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-jschlatt-twomad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deepleffen-jschlatt-twomad/1628748624093/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen-jschlatt-twomad
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Deep Leffen Bot & LUMBAGO (1984) & Schlatt
@deepleffen-jschlatt-twomad
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deep Leffen Bot & LUMBAGO (1984) & Schlatt.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen-jschlatt-twomad's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
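The training script itself lives in the linked W&B runs rather than in the card; purely as a sketch of what causal-LM fine-tuning of GPT-2 on tweet text can look like, with a placeholder corpus and hypothetical hyperparameters:

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tweets = ["example tweet one", "example tweet two"]  # placeholder corpus

tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained('gpt2')

dataset = Dataset.from_dict({'text': tweets}).map(
    lambda batch: tokenizer(batch['text'], truncation=True, max_length=128),
    batched=True, remove_columns=['text'])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='out',
                           num_train_epochs=4,              # hypothetical
                           per_device_train_batch_size=8),  # hypothetical
    train_dataset=dataset,
    # mlm=False selects the causal (next-token) language-modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```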
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot</div>
<div style="text-align: center; font-size: 14px;">@deepleffen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot.
| Data | Deep Leffen Bot |
| --- | --- |
| Tweets downloaded | 589 |
| Retweets | 14 |
| Short tweets | 27 |
| Tweets kept | 548 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1p32tock/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/imjjixah) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/imjjixah/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/deepleffen/1654277690184/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deepleffen
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Deep Leffen Bot
@deepleffen
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deep Leffen Bot.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deepleffen's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
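Decoding parameters can be tuned at call time; a sketch, with the temperature and length values below being illustrative rather than recommended settings:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/deepleffen')

# Higher temperature flattens the distribution and yields more chaotic tweets
generator("My dream is", num_return_sequences=5,
          do_sample=True, temperature=1.2, max_new_tokens=40)
```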
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ryan 🤖 AI Bot </div>
<div style="font-size: 15px">@defnotreal_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@defnotreal_'s tweets](https://twitter.com/defnotreal_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2920 |
| Retweets | 2437 |
| Short tweets | 113 |
| Tweets kept | 370 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lvnbvmn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @defnotreal_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uffbfpz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uffbfpz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/defnotreal_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/defnotreal_/1616212090089/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/defnotreal_
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Ryan AI Bot
@defnotreal\_ bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @defnotreal\_'s tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @defnotreal\_'s tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
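For offline reuse, the checkpoint can be saved next to the script and reloaded from disk; a sketch, with the local path chosen arbitrarily:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/defnotreal_')

# Cache a local copy so later runs do not need the Hub
generator.model.save_pretrained('./defnotreal_model')
generator.tokenizer.save_pretrained('./defnotreal_model')

offline = pipeline('text-generation', model='./defnotreal_model')
offline("My dream is", num_return_sequences=5)
```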
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/58546628/goat22_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/726824334002638848/BEZFr1k8_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & deg & Fred Delicious</div>
<div style="text-align: center; font-size: 14px;">@degg-dril-fred_delicious</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & deg & Fred Delicious.
| Data | wint | deg | Fred Delicious |
| --- | --- | --- | --- |
| Tweets downloaded | 3227 | 3152 | 3235 |
| Retweets | 473 | 142 | 429 |
| Short tweets | 318 | 42 | 398 |
| Tweets kept | 2436 | 2968 | 2408 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mwoed1f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @degg-dril-fred_delicious's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a691ucn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a691ucn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/degg-dril-fred_delicious')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/degg-dril-fred_delicious/1634845142916/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/degg-dril-fred_delicious
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
AI CYBORG
wint & deg & Fred Delicious
@degg-dril-fred\_delicious
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from wint & deg & Fred Delicious.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @degg-dril-fred\_delicious's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
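By default the pipeline echoes the prompt in front of each sample; a sketch of returning only the generated continuation:

```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/degg-dril-fred_delicious')

# return_full_text=False strips the prompt, leaving only the continuation
generator("My dream is", num_return_sequences=5, return_full_text=False)
```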
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Degrassi No Context 🤖 AI Bot </div>
<div style="font-size: 15px">@degrassinocontx bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@degrassinocontx's tweets](https://twitter.com/degrassinocontx).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 54 |
| Short tweets | 1504 |
| Tweets kept | 1687 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mu201mzi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @degrassinocontx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wxznhll) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wxznhll/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/degrassinocontx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/degrassinocontx/1614122429501/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/degrassinocontx
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Degrassi No Context AI Bot
@degrassinocontx bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @degrassinocontx's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @degrassinocontx's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
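Tokens can also be printed as they are sampled instead of all at once; a sketch assuming a transformers version recent enough to ship TextStreamer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/degrassinocontx')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/degrassinocontx')

inputs = tokenizer("My dream is", return_tensors='pt')
# The streamer writes decoded tokens to stdout during generation
model.generate(**inputs, do_sample=True, max_new_tokens=40,
               streamer=TextStreamer(tokenizer),
               pad_token_id=tokenizer.eos_token_id)
```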
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Just the Tip 🤖 AI Bot </div>
<div style="font-size: 15px">@deityofyoutube bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deityofyoutube's tweets](https://twitter.com/deityofyoutube).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1518 |
| Retweets | 58 |
| Short tweets | 47 |
| Tweets kept | 1413 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3o3puxa8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deityofyoutube's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ou2v7h4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ou2v7h4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deityofyoutube')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deityofyoutube/1620252403590/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deityofyoutube
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Just the Tip AI Bot
@deityofyoutube bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deityofyoutube's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deityofyoutube's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
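Small fine-tuned models tend to loop; a sketch of discouraging that with a repetition penalty, whose value here is illustrative:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/deityofyoutube')

# Values above 1.0 penalize tokens that have already appeared
generator("My dream is", num_return_sequences=5, repetition_penalty=1.3)
```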
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">evelyn 🤖 AI Bot </div>
<div style="font-size: 15px">@deleteevelyn bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deleteevelyn's tweets](https://twitter.com/deleteevelyn).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 277 |
| Short tweets | 358 |
| Tweets kept | 2561 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cgctgf3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deleteevelyn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/inf2farl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/inf2farl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deleteevelyn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deleteevelyn/1614106800270/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deleteevelyn
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
evelyn AI Bot
@deleteevelyn bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deleteevelyn's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deleteevelyn's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
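Top-k sampling is another common decoding choice; a sketch, with k chosen for illustration:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/deleteevelyn')

# top_k restricts each sampling step to the 50 most likely tokens
generator("My dream is", num_return_sequences=5, do_sample=True, top_k=50)
```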
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/2300677070/886b0055ea8f8e5ba55a58f8ea82dac8_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Delicious Tacos 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@delicious_tacos bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@delicious_tacos's tweets](https://twitter.com/delicious_tacos).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3214</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>817</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>578</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1819</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1573t9o6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @delicious_tacos's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/39svipdd) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/39svipdd/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/delicious_tacos'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/delicious_tacos
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Delicious Tacos AI Bot </div>
<div style="font-size: 15px; color: #657786">@delicious_tacos bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @delicious_tacos's tweets.
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3214 |
| Retweets | 817 |
| Short tweets | 578 |
| Tweets kept | 1819 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @delicious_tacos's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/delicious_tacos')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🍕 Deliveroo France</div>
<div style="text-align: center; font-size: 14px;">@deliveroo_fr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🍕 Deliveroo France.
| Data | 🍕 Deliveroo France |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 5 |
| Short tweets | 209 |
| Tweets kept | 3036 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35lhnvsx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deliveroo_fr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3544md47) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3544md47/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deliveroo_fr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/deliveroo_fr/1639165126235/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deliveroo_fr
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Deliveroo France
@deliveroo\_fr
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Deliveroo France.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deliveroo\_fr's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
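A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/deliveroo_fr')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```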
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dace🐛🏠 🤖 AI Bot </div>
<div style="font-size: 15px">@deliverydace bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deliverydace's tweets](https://twitter.com/deliverydace).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2003 |
| Retweets | 169 |
| Short tweets | 329 |
| Tweets kept | 1505 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1826o3k3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deliverydace's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24r42zx0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24r42zx0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deliverydace')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deliverydace/1613630846956/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deliverydace
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Dace AI Bot
@deliverydace bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deliverydace's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deliverydace's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
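A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/deliverydace')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```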
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">OptionsWolf</div>
<div style="text-align: center; font-size: 14px;">@deltagammaqueen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from OptionsWolf.
| Data | OptionsWolf |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 264 |
| Short tweets | 1164 |
| Tweets kept | 1817 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tt0l3wo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deltagammaqueen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/unz6kk43) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/unz6kk43/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deltagammaqueen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deltagammaqueen/1628736507176/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deltagammaqueen
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
OptionsWolf
@deltagammaqueen
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from OptionsWolf.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deltagammaqueen's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
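A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/deltagammaqueen')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```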
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">rj bday (season) 🦜🍓💝 🤖 AI Bot </div>
<div style="font-size: 15px">@demirenjun bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@demirenjun's tweets](https://twitter.com/demirenjun).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3199 |
| Retweets | 800 |
| Short tweets | 384 |
| Tweets kept | 2015 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bdlmgyb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @demirenjun's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ck8cxvw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ck8cxvw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/demirenjun')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/demirenjun/1617917661023/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/demirenjun
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
rj bday (season) AI Bot
@demirenjun bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @demirenjun's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @demirenjun's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
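A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/demirenjun')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```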
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dení has returned. 🤖 AI Bot </div>
<div style="font-size: 15px">@deni_is_aflor bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deni_is_aflor's tweets](https://twitter.com/deni_is_aflor).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 1101 |
| Short tweets | 195 |
| Tweets kept | 1900 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22jo6jl8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deni_is_aflor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/l4we4gl2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/l4we4gl2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deni_is_aflor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deni_is_aflor/1617777629095/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deni_is_aflor
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Dení has returned. AI Bot
@deni\_is\_aflor bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deni\_is\_aflor's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deni\_is\_aflor's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
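A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/deni_is_aflor')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```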
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Den</div>
<div style="text-align: center; font-size: 14px;">@denyah_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Den.
| Data | Den |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 464 |
| Short tweets | 795 |
| Tweets kept | 1985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3e5c08gr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @denyah_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1438ocp8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1438ocp8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/denyah_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/denyah_/1643852632266/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/denyah_
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Den
@denyah\_
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Den.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @denyah\_'s tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
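A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/denyah_')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```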
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">pete wolfendale 🤖 AI Bot </div>
<div style="font-size: 15px">@deontologistics bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deontologistics's tweets](https://twitter.com/deontologistics).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 590 |
| Short tweets | 187 |
| Tweets kept | 2453 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ahwv4uv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deontologistics's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dpgq6x6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dpgq6x6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deontologistics')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deontologistics/1616689045190/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deontologistics
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
pete wolfendale AI Bot
@deontologistics bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deontologistics's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deontologistics's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
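A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/deontologistics')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```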
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">yo sophist</div>
<div style="text-align: center; font-size: 14px;">@deptofsophistry</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from yo sophist.
| Data | yo sophist |
| --- | --- |
| Tweets downloaded | 3215 |
| Retweets | 327 |
| Short tweets | 762 |
| Tweets kept | 2126 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3p698zbi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deptofsophistry's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nt0sevr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nt0sevr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deptofsophistry')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deptofsophistry/1621365721868/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deptofsophistry
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
yo sophist
@deptofsophistry
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from yo sophist.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deptofsophistry's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
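A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/deptofsophistry')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```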
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DER SPIEGEL</div>
<div style="text-align: center; font-size: 14px;">@derspiegel</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DER SPIEGEL.
| Data | DER SPIEGEL |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 478 |
| Short tweets | 6 |
| Tweets kept | 2766 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2uv8zr0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @derspiegel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i3q4xu9o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/derspiegel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/derspiegel/1638461583796/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/derspiegel
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
DER SPIEGEL
@derspiegel
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from DER SPIEGEL.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @derspiegel's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
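A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/derspiegel')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```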
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dev, Bride of Kripkenstein</div>
<div style="text-align: center; font-size: 14px;">@dervine7</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dev, Bride of Kripkenstein.
| Data | Dev, Bride of Kripkenstein |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 177 |
| Short tweets | 272 |
| Tweets kept | 2788 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2j2ia8ja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dervine7's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/287itbe2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/287itbe2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dervine7')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dervine7/1633413178103/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dervine7
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Dev, Bride of Kripkenstein
@dervine7
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Dev, Bride of Kripkenstein.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dervine7's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
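A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/dervine7')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```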
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">nathan, stuck on magic mountain 🤖 AI Bot </div>
<div style="font-size: 15px">@derweise91 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@derweise91's tweets](https://twitter.com/derweise91).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 468 |
| Short tweets | 408 |
| Tweets kept | 2357 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2cgk3c79/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @derweise91's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/yxo7yhz5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/yxo7yhz5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/derweise91')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/derweise91/1616691639404/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/derweise91
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
nathan, stuck on magic mountain AI Bot
@derweise91 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @derweise91's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @derweise91's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
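A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/derweise91')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```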
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">♦️Moira Perfected♦️ 🤖 AI Bot </div>
<div style="font-size: 15px">@destiny_thememe bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@destiny_thememe's tweets](https://twitter.com/destiny_thememe).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 186 |
| Short tweets | 772 |
| Tweets kept | 2284 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bbkix40/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @destiny_thememe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20xpitr1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20xpitr1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/destiny_thememe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/destiny_thememe/1616803427645/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/destiny_thememe
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Moira Perfected AI Bot
@destiny\_thememe bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @destiny\_thememe's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @destiny\_thememe's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
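A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/destiny_thememe')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```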
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1554982611/Nolan_Finley1_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/667024309995626496/OmzBnHNF_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Detroit News Opinion & Nolan Finley & Ingrid Jacques</div>
<div style="text-align: center; font-size: 14px;">@detnewsopinion-ingrid_jacques-nolanfinleydn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Detroit News Opinion & Nolan Finley & Ingrid Jacques.
| Data | Detroit News Opinion | Nolan Finley | Ingrid Jacques |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3249 | 3248 |
| Retweets | 530 | 1833 | 1324 |
| Short tweets | 0 | 49 | 45 |
| Tweets kept | 2720 | 1367 | 1879 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ktqwqx5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @detnewsopinion-ingrid_jacques-nolanfinleydn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vu0trurc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vu0trurc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/detnewsopinion-ingrid_jacques-nolanfinleydn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/detnewsopinion-ingrid_jacques-nolanfinleydn/1639365489716/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/detnewsopinion-ingrid_jacques-nolanfinleydn
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Detroit News Opinion & Nolan Finley & Ingrid Jacques
@detnewsopinion-ingrid\_jacques-nolanfinleydn
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Detroit News Opinion & Nolan Finley & Ingrid Jacques.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @detnewsopinion-ingrid\_jacques-nolanfinleydn's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
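A minimal sketch of that call, with the model id taken from this card:

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/detnewsopinion-ingrid_jacques-nolanfinleydn')

# generate five completions seeded with a short prompt
generator("My dream is", num_return_sequences=5)
```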
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Detroit News Opinion</div>
<div style="text-align: center; font-size: 14px;">@detnewsopinion</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Detroit News Opinion.
| Data | Detroit News Opinion |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 527 |
| Short tweets | 0 |
| Tweets kept | 2723 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gpe3yyem/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @detnewsopinion's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zezrwsaf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zezrwsaf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/detnewsopinion')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/detnewsopinion/1639432824211/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/detnewsopinion
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Detroit News Opinion
@detnewsopinion
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Detroit News Opinion.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @detnewsopinion's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
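For reference, the same generation snippet shown in the card above, with explanatory comments:

```python
from transformers import pipeline

# 'text-generation' selects the causal-LM pipeline for this GPT-2 model
generator = pipeline('text-generation',
                     model='huggingtweets/detnewsopinion')

# Return five alternative completions of the prompt
generator("My dream is", num_return_sequences=5)
```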
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1426046688263692288/RzlZFjIP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1312018147822759937/Z7XnZkhn_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sad rico & follow me only if you're sad & ...</div>
<div style="text-align: center; font-size: 14px;">@detseretninu-dumbricardo-illuminusnumb</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sad rico & follow me only if you're sad & ....
| Data | sad rico | follow me only if you're sad | ... |
| --- | --- | --- | --- |
| Tweets downloaded | 768 | 3233 | 677 |
| Retweets | 0 | 167 | 1 |
| Short tweets | 102 | 755 | 285 |
| Tweets kept | 666 | 2311 | 391 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/l42hthlz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @detseretninu-dumbricardo-illuminusnumb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c1hyp8lf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c1hyp8lf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/detseretninu-dumbricardo-illuminusnumb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/detseretninu-dumbricardo-illuminusnumb/1629841756956/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/detseretninu-dumbricardo-illuminusnumb
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
sad rico & follow me only if you're sad & ...
@detseretninu-dumbricardo-illuminusnumb
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from sad rico & follow me only if you're sad & ....
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @detseretninu-dumbricardo-illuminusnumb's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
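A minimal usage sketch, identical to the snippet in the card above apart from the comments:

```python
from transformers import pipeline

# Download and load the fine-tuned model by its Hub id
generator = pipeline('text-generation',
                     model='huggingtweets/detseretninu-dumbricardo-illuminusnumb')

# Generate five samples seeded with the same prompt as the widget
generator("My dream is", num_return_sequences=5)
```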
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Riley 🇺🇸 🤖 AI Bot </div>
<div style="font-size: 15px">@deusdairyland bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deusdairyland's tweets](https://twitter.com/deusdairyland).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 908 |
| Retweets | 136 |
| Short tweets | 219 |
| Tweets kept | 553 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e8tma1u2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deusdairyland's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/146925y8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/146925y8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deusdairyland')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/deusdairyland/1616653373811/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/deusdairyland
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Riley 🇺🇸 AI Bot
@deusdairyland bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @deusdairyland's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @deusdairyland's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
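The pipeline call from the card above, reproduced here for convenience:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/deusdairyland')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```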
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1288646558364372994/jgsTkFCl_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">koob85 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@devkoob bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@devkoob's tweets](https://twitter.com/devkoob).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>712</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>27</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>191</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>494</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qdwtu190/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @devkoob's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nweh9viw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nweh9viw/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/devkoob'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/devkoob/1609552229453/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/devkoob
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">koob85 AI Bot </div>
<div style="font-size: 15px; color: #657786">@devkoob bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @devkoob's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>712</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>27</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>191</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>494</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @devkoob's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/devkoob'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">𝐃𝐄𝐕𝐎𝐍 🤖 AI Bot </div>
<div style="font-size: 15px">@devon_onearth bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@devon_onearth's tweets](https://twitter.com/devon_onearth).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3227 |
| Retweets | 449 |
| Short tweets | 358 |
| Tweets kept | 2420 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ilmmvbmb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @devon_onearth's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ryyr6zq5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ryyr6zq5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/devon_onearth')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/devon_onearth/1614135166237/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/devon_onearth
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
𝐃𝐄𝐕𝐎𝐍 AI Bot
@devon\_onearth bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @devon\_onearth's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @devon\_onearth's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
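A minimal sketch of that call, matching the snippet in the card above:

```python
from transformers import pipeline

# 'text-generation' wraps the model in a causal-LM sampling loop
generator = pipeline('text-generation',
                     model='huggingtweets/devon_onearth')

# Return five alternative completions for the prompt
generator("My dream is", num_return_sequences=5)
```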
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/748969887146471424/4BmVTQAv_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/74188698/NeilTysonOriginsA-Crop_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nigel Thurlow & Ernest Wright, Ph. D. ABD & Neil deGrasse Tyson</div>
<div style="text-align: center; font-size: 14px;">@devops_guru-neiltyson-nigelthurlow</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nigel Thurlow & Ernest Wright, Ph. D. ABD & Neil deGrasse Tyson.
| Data | Nigel Thurlow | Ernest Wright, Ph. D. ABD | Neil deGrasse Tyson |
| --- | --- | --- | --- |
| Tweets downloaded | 1264 | 1933 | 3250 |
| Retweets | 648 | 20 | 10 |
| Short tweets | 27 | 105 | 79 |
| Tweets kept | 589 | 1808 | 3161 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jc9vah1k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @devops_guru-neiltyson-nigelthurlow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2myicem9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2myicem9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/devops_guru-neiltyson-nigelthurlow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/devops_guru-neiltyson-nigelthurlow/1626908139492/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/devops_guru-neiltyson-nigelthurlow
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Nigel Thurlow & Ernest Wright, Ph. D. ABD & Neil deGrasse Tyson
@devops\_guru-neiltyson-nigelthurlow
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Nigel Thurlow & Ernest Wright, Ph. D. ABD & Neil deGrasse Tyson.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @devops\_guru-neiltyson-nigelthurlow's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
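For reference, the generation snippet from the card above, with comments added:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint by its Hub id
generator = pipeline('text-generation',
                     model='huggingtweets/devops_guru-neiltyson-nigelthurlow')

# Sample five continuations seeded with the widget prompt
generator("My dream is", num_return_sequences=5)
```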
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">an actual dog 🤖 AI Bot </div>
<div style="font-size: 15px">@devtesla bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@devtesla's tweets](https://twitter.com/devtesla).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3164 |
| Retweets | 1246 |
| Short tweets | 222 |
| Tweets kept | 1696 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mgmikdu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @devtesla's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8vrkz503) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8vrkz503/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/devtesla')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/devtesla/1614137580281/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/devtesla
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
an actual dog AI Bot
@devtesla bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @devtesla's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @devtesla's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
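A minimal usage sketch, the same call shown in the card above:

```python
from transformers import pipeline

# Download and load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/devtesla')

# Generate five samples from the prompt
generator("My dream is", num_return_sequences=5)
```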
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Devtrospective 🤖 AI Bot </div>
<div style="font-size: 15px">@devtrospective bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@devtrospective's tweets](https://twitter.com/devtrospective).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 562 |
| Short tweets | 414 |
| Tweets kept | 2263 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fwfr76h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @devtrospective's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3moy4evm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3moy4evm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/devtrospective')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/devtrospective/1617905426485/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/devtrospective
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Devtrospective AI Bot
@devtrospective bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @devtrospective's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @devtrospective's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
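The same pipeline call as in the card above, reproduced with comments:

```python
from transformers import pipeline

# 'text-generation' selects the causal-LM pipeline for this GPT-2 model
generator = pipeline('text-generation',
                     model='huggingtweets/devtrospective')

# Return five alternative completions of the prompt
generator("My dream is", num_return_sequences=5)
```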
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">dGc 🤖 AI Bot </div>
<div style="font-size: 15px">@dgcyt_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dgcyt_'s tweets](https://twitter.com/dgcyt_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 907 |
| Retweets | 31 |
| Short tweets | 353 |
| Tweets kept | 523 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2172uj60/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dgcyt_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3goormmx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3goormmx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dgcyt_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dgcyt_/1619447035696/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dgcyt_
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
dGc AI Bot
@dgcyt\_ bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dgcyt\_'s tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dgcyt\_'s tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
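A minimal sketch of that call, mirroring the snippet in the card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/dgcyt_')

# Sample five continuations of the prompt used in the widget
generator("My dream is", num_return_sequences=5)
```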
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/870004265170948097/5tyWgIkd_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Damien Henry 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@dh7net bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dh7net's tweets](https://twitter.com/dh7net).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>2463</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>857</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>227</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1379</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/26i29me7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dh7net's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/6w8xyhch) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/6w8xyhch/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dh7net'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dh7net/1602195754110/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dh7net
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Damien Henry AI Bot </div>
<div style="font-size: 15px; color: #657786">@dh7net bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @dh7net's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>2463</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>857</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>227</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1379</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @dh7net's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dh7net'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
{
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/911053058104221696/ERPL-sS4_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dharmesh Kakadia 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@dharmeshkakadia bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dharmeshkakadia's tweets](https://twitter.com/dharmeshkakadia).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3231</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1284</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>505</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1442</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/igebzms3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dharmeshkakadia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2rjrmg20) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2rjrmg20/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dharmeshkakadia'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dharmeshkakadia/1602267558589/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dharmeshkakadia
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dharmesh Kakadia AI Bot </div>
<div style="font-size: 15px; color: #657786">@dharmeshkakadia bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @dharmeshkakadia's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3231</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1284</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>505</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1442</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @dharmeshkakadia's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dharmeshkakadia'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
{
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1245735185787822080/riKefvZr_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Carlos 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@diaz_de_leon bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@diaz_de_leon's tweets](https://twitter.com/diaz_de_leon).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>718</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>167</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>66</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>485</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/w8v47wri/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @diaz_de_leon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/17bl278f) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/17bl278f/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/diaz_de_leon'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/diaz_de_leon/1603509315873/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/diaz_de_leon
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Carlos AI Bot </div>
<div style="font-size: 15px; color: #657786">@diaz_de_leon bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @diaz_de_leon's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>718</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>167</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>66</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>485</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @diaz_de_leon's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/diaz_de_leon'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">💾 🤖 AI Bot </div>
<div style="font-size: 15px">@digital_languor bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@digital_languor's tweets](https://twitter.com/digital_languor).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 1037 |
| Short tweets | 589 |
| Tweets kept | 1574 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z1me0hi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @digital_languor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uffw47ml) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uffw47ml/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/digital_languor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/digital_languor
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI Bot
@digital\_languor bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @digital\_languor's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @digital\_languor's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
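The code snippet was stripped from this processed copy; a minimal sketch, mirroring the block in the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/digital_languor')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```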
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">artchick.eth 🔥</div>
<div style="text-align: center; font-size: 14px;">@digitalartchick</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from artchick.eth 🔥.
| Data | artchick.eth 🔥 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 173 |
| Short tweets | 580 |
| Tweets kept | 2495 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3m0unu0z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @digitalartchick's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3q41dpi6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3q41dpi6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/digitalartchick')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/digitalartchick/1622110282921/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/digitalartchick
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
URL
@digitalartchick
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from URL.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @digitalartchick's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
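The code block was dropped during text processing; a minimal sketch based on the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/digitalartchick')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```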
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Digitalsolver 🤖 AI Bot </div>
<div style="font-size: 15px">@digitalsolver1 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@digitalsolver1's tweets](https://twitter.com/digitalsolver1).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 291 |
| Retweets | 112 |
| Short tweets | 70 |
| Tweets kept | 109 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23z4oayh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @digitalsolver1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/237vwzkl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/237vwzkl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/digitalsolver1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/digitalsolver1/1616653735166/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/digitalsolver1
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Digitalsolver AI Bot
@digitalsolver1 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @digitalsolver1's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @digitalsolver1's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
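The snippet was stripped here; a minimal sketch, mirroring the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/digitalsolver1')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```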
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">george w kush 🤖 AI Bot </div>
<div style="font-size: 15px">@digitalsoyboy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@digitalsoyboy's tweets](https://twitter.com/digitalsoyboy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3170 |
| Retweets | 462 |
| Short tweets | 369 |
| Tweets kept | 2339 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26qiav6i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @digitalsoyboy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b4m8rf4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b4m8rf4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/digitalsoyboy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/digitalsoyboy/1617805776990/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/digitalsoyboy
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
george w kush AI Bot
@digitalsoyboy bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @digitalsoyboy's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @digitalsoyboy's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
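A minimal sketch of the stripped code block, matching the snippet in the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/digitalsoyboy')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```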
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jess O'Brien 🤖 AI Bot </div>
<div style="font-size: 15px">@disabledjess bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@disabledjess's tweets](https://twitter.com/disabledjess).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 713 |
| Retweets | 324 |
| Short tweets | 34 |
| Tweets kept | 355 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dt08vg5c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disabledjess's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zxrg63ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zxrg63ip/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/disabledjess')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/disabledjess/1616670355194/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/disabledjess
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jess O'Brien AI Bot
@disabledjess bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @disabledjess's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @disabledjess's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
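The processed copy omits the code; a minimal sketch based on the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/disabledjess')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```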
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">luna 🤖 AI Bot </div>
<div style="font-size: 15px">@discarddiscord bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@discarddiscord's tweets](https://twitter.com/discarddiscord).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1495 |
| Retweets | 289 |
| Short tweets | 213 |
| Tweets kept | 993 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tvxkurq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discarddiscord's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g2xt22m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g2xt22m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/discarddiscord')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/discarddiscord/1614246710317/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/discarddiscord
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
luna AI Bot
@discarddiscord bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @discarddiscord's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @discarddiscord's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
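The code snippet was stripped here; a minimal sketch, mirroring the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/discarddiscord')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```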
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">andrew blinn 🤖 AI Bot </div>
<div style="font-size: 15px">@disconcision bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@disconcision's tweets](https://twitter.com/disconcision).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2262 |
| Retweets | 453 |
| Short tweets | 159 |
| Tweets kept | 1650 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/n4jrdsqh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disconcision's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2f36wyoh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2f36wyoh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/disconcision')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/disconcision/1616643733458/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/disconcision
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
andrew blinn AI Bot
@disconcision bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @disconcision's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @disconcision's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
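A minimal sketch of the dropped code block, based on the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/disconcision')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```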
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/980964012170121217/U6FjPH4H_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LIAM & wint & Picasso</div>
<div style="text-align: center; font-size: 14px;">@discountpicasso-dril-liam_100000</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LIAM & wint & Picasso.
| Data | LIAM | wint | Picasso |
| --- | --- | --- | --- |
| Tweets downloaded | 1962 | 3226 | 3216 |
| Retweets | 135 | 472 | 427 |
| Short tweets | 435 | 313 | 421 |
| Tweets kept | 1392 | 2441 | 2368 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w4ekve8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discountpicasso-dril-liam_100000's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/discountpicasso-dril-liam_100000')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/discountpicasso-dril-liam_100000/1630973640579/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/discountpicasso-dril-liam_100000
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
LIAM & wint & Picasso
@discountpicasso-dril-liam\_100000
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from LIAM & wint & Picasso.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @discountpicasso-dril-liam\_100000's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
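The snippet was stripped from this copy; a minimal sketch, mirroring the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/discountpicasso-dril-liam_100000')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```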
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">malignant tzara 🤖 AI Bot </div>
<div style="font-size: 15px">@divorceenforcer bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@divorceenforcer's tweets](https://twitter.com/divorceenforcer).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3148 |
| Retweets | 1127 |
| Short tweets | 574 |
| Tweets kept | 1447 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2b3i6627/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @divorceenforcer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13x1aewb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13x1aewb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/divorceenforcer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/divorceenforcer/1614097005501/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/divorceenforcer
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
malignant tzara AI Bot
@divorceenforcer bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @divorceenforcer's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @divorceenforcer's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
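The code block was dropped in processing; a minimal sketch based on the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/divorceenforcer')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```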
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">memphis milano enthusiast 🤖 AI Bot </div>
<div style="font-size: 15px">@dkulchar bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dkulchar's tweets](https://twitter.com/dkulchar).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 551 |
| Short tweets | 569 |
| Tweets kept | 2116 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ar0h0xc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dkulchar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/v3hyz25i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/v3hyz25i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dkulchar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dkulchar
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
memphis milano enthusiast AI Bot
@dkulchar bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dkulchar's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dkulchar's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
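A minimal sketch of the stripped snippet, matching the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/dkulchar')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```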
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pirate Queen Grey</div>
<div style="text-align: center; font-size: 14px;">@dndomme</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pirate Queen Grey.
| Data | Pirate Queen Grey |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 1329 |
| Short tweets | 288 |
| Tweets kept | 1601 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ucgtv6r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dndomme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sej7nbm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sej7nbm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dndomme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dndomme/1632870893354/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dndomme
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Pirate Queen Grey
@dndomme
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Pirate Queen Grey.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dndomme's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
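The code snippet was stripped here; a minimal sketch, mirroring the raw card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/dndomme')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```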
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1193922190955167744/kFfjfcL4_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Matthias Dobbelaere-Welvaert 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@dobbelaerew bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dobbelaerew's tweets](https://twitter.com/dobbelaerew).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3208</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>344</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>350</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2514</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/30tqyeyq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dobbelaerew's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2rthb7d0) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2rthb7d0/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dobbelaerew'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dobbelaerew
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Matthias Dobbelaere-Welvaert AI Bot </div>
<div style="font-size: 15px; color: #657786">@dobbelaerew bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @dobbelaerew's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3208</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>344</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>350</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2514</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @dobbelaerew's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dobbelaerew'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Tim Houk 🤖 AI Bot </div>
<div style="font-size: 15px">@dochouk bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dochouk's tweets](https://twitter.com/dochouk).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 441 |
| Retweets | 21 |
| Short tweets | 11 |
| Tweets kept | 409 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23t4thvg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dochouk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/218a58qu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/218a58qu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dochouk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dochouk/1616781012029/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dochouk
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Tim Houk AI Bot
@dochouk bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dochouk's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dochouk's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
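A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/dochouk')
generator("My dream is", num_return_sequences=5)
```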
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1250027548785938432/KHyOaVQY_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Emmet Burke 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@doctor_emmet bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@doctor_emmet's tweets](https://twitter.com/doctor_emmet).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2496 |
| Retweets | 204 |
| Short tweets | 176 |
| Tweets kept | 2116 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/duj1xqx6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @doctor_emmet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/yzdnl9ld) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/yzdnl9ld/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/doctor_emmet')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/doctor_emmet/1603833315216/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/doctor_emmet
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Emmet Burke AI Bot
@doctor_emmet bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @doctor_emmet's tweets.
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2496 |
| Retweets | 204 |
| Short tweets | 176 |
| Tweets kept | 2116 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @doctor_emmet's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/doctor_emmet')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">dodo82.jp</div>
<div style="text-align: center; font-size: 14px;">@dodo82j</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from dodo82.jp.
| Data | dodo82.jp |
| --- | --- |
| Tweets downloaded | 217 |
| Retweets | 31 |
| Short tweets | 26 |
| Tweets kept | 160 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2k4cbj1t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dodo82j's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qiazp47) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qiazp47/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dodo82j')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dodo82j/1628669484939/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dodo82j
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
URL
@dodo82j
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from URL.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dodo82j's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
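A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/dodo82j')
generator("My dream is", num_return_sequences=5)
```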
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1046968391389589507/_0r5bQLl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Thoughts of Dog®</div>
<div style="text-align: center; font-size: 14px;">@dog_feelings-elonmusk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Thoughts of Dog®.
| Data | Elon Musk | Thoughts of Dog® |
| --- | --- | --- |
| Tweets downloaded | 400 | 1148 |
| Retweets | 32 | 14 |
| Short tweets | 123 | 17 |
| Tweets kept | 245 | 1117 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2vw0f8wk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dog_feelings-elonmusk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2o3nweey) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2o3nweey/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dog_feelings-elonmusk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dog_feelings-elonmusk
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Elon Musk & Thoughts of Dog®
@dog\_feelings-elonmusk
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Elon Musk & Thoughts of Dog®.
| Data | Elon Musk | Thoughts of Dog® |
| --- | --- | --- |
| Tweets downloaded | 400 | 1148 |
| Retweets | 32 | 14 |
| Short tweets | 123 | 17 |
| Tweets kept | 245 | 1117 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dog\_feelings-elonmusk's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
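A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/dog_feelings-elonmusk')
generator("My dream is", num_return_sequences=5)
```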
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Thoughts of Dog®</div>
<div style="text-align: center; font-size: 14px;">@dog_feelings</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Thoughts of Dog®.
| Data | Thoughts of Dog® |
| --- | --- |
| Tweets downloaded | 1213 |
| Retweets | 23 |
| Short tweets | 21 |
| Tweets kept | 1169 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1u7m68sz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dog_feelings's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lt738fgm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lt738fgm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dog_feelings')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/dog_feelings/1668288769350/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dog_feelings
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Thoughts of Dog®
@dog\_feelings
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Thoughts of Dog®.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dog\_feelings's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
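A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/dog_feelings')
generator("My dream is", num_return_sequences=5)
```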
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🥞 🤖 AI Bot </div>
<div style="font-size: 15px">@dogdick420cum bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dogdick420cum's tweets](https://twitter.com/dogdick420cum).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 148 |
| Short tweets | 512 |
| Tweets kept | 2582 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/nzltah4f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dogdick420cum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29pe6wy0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29pe6wy0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dogdick420cum')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dogdick420cum/1615429013878/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dogdick420cum
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI Bot
@dogdick420cum bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dogdick420cum's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dogdick420cum's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
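A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/dogdick420cum')
generator("My dream is", num_return_sequences=5)
```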
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">doge (likes democracy) 🌐 🤖 AI Bot </div>
<div style="font-size: 15px">@dogepod_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dogepod_'s tweets](https://twitter.com/dogepod_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 584 |
| Short tweets | 525 |
| Tweets kept | 2128 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/316ieof3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dogepod_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fl8hjof) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fl8hjof/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dogepod_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dogepod_/1617166176912/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dogepod_
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
doge (likes democracy) AI Bot
@dogepod\_ bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dogepod\_'s tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dogepod\_'s tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
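A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/dogepod_')
generator("My dream is", num_return_sequences=5)
```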
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Doityboy</div>
<div style="text-align: center; font-size: 14px;">@doityboy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Doityboy.
| Data | Doityboy |
| --- | --- |
| Tweets downloaded | 3180 |
| Retweets | 551 |
| Short tweets | 660 |
| Tweets kept | 1969 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17aeg3tr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @doityboy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qumubtj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qumubtj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/doityboy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/doityboy/1621603103969/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/doityboy
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Doityboy
@doityboy
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Doityboy.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @doityboy's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
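A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/doityboy')
generator("My dream is", num_return_sequences=5)
```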
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jean-Emmanuel De La Martinière</div>
<div style="text-align: center; font-size: 14px;">@dojacat</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jean-Emmanuel De La Martinière.
| Data | Jean-Emmanuel De La Martinière |
| --- | --- |
| Tweets downloaded | 1569 |
| Retweets | 124 |
| Short tweets | 322 |
| Tweets kept | 1123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mc5ryte/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dojacat's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dojacat')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/dojacat/1644852645931/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dojacat
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Jean-Emmanuel De La Martinière
@dojacat
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Jean-Emmanuel De La Martinière.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dojacat's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
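A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/dojacat')
generator("My dream is", num_return_sequences=5)
```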
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dom and Cats 😼 🤖 AI Bot </div>
<div style="font-size: 15px">@domandcats bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@domandcats's tweets](https://twitter.com/domandcats).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 69 |
| Short tweets | 452 |
| Tweets kept | 2728 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/24l3uch3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @domandcats's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nsc2js1f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nsc2js1f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/domandcats')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/domandcats/1616883428985/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/domandcats
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Dom and Cats AI Bot
@domandcats bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @domandcats's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @domandcats's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
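A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/domandcats')
generator("My dream is", num_return_sequences=5)
```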
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Domonic</div>
<div style="text-align: center; font-size: 14px;">@domonic_m</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Domonic.
| Data | Domonic |
| --- | --- |
| Tweets downloaded | 502 |
| Retweets | 70 |
| Short tweets | 69 |
| Tweets kept | 363 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1q7f1cu6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @domonic_m's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/no8iew6j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/no8iew6j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/domonic_m')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/domonic_m/1629517784951/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/domonic_m
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Domonic
@domonic\_m
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Domonic.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @domonic\_m's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
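A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/domonic_m')
generator("My dream is", num_return_sequences=5)
```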
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Donald Clark 🤖 AI Bot </div>
<div style="font-size: 15px">@donaldclark bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@donaldclark's tweets](https://twitter.com/donaldclark).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 4 |
| Short tweets | 195 |
| Tweets kept | 3051 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2vaujq4r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @donaldclark's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2of8k8rc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2of8k8rc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/donaldclark')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/donaldclark/1617223633702/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/donaldclark
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Donald Clark AI Bot
@donaldclark bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @donaldclark's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @donaldclark's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
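A minimal usage sketch, mirroring the snippet from the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hub and sample five continuations of the prompt.
generator = pipeline('text-generation',
                     model='huggingtweets/donaldclark')
generator("My dream is", num_return_sequences=5)
```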
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Donald Hoffman 🤖 AI Bot </div>
<div style="font-size: 15px">@donalddhoffman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@donalddhoffman's tweets](https://twitter.com/donalddhoffman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 236 |
| Retweets | 11 |
| Short tweets | 45 |
| Tweets kept | 180 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wzfrcs4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @donalddhoffman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2zo2lld7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2zo2lld7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/donalddhoffman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/donalddhoffman
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Donald Hoffman AI Bot
@donalddhoffman bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @donalddhoffman's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @donalddhoffman's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
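For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/donalddhoffman')
generator("My dream is", num_return_sequences=5)
```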
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Donkey Kong</div>
<div style="text-align: center; font-size: 14px;">@donkeykongape</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Donkey Kong.
| Data | Donkey Kong |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 72 |
| Short tweets | 1081 |
| Tweets kept | 2047 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pcwumgk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @donkeykongape's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/253exk8q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/253exk8q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/donkeykongape')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/donkeykongape/1625293730159/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/donkeykongape
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Donkey Kong
@donkeykongape
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Donkey Kong.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @donkeykongape's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
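For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/donkeykongape')
generator("My dream is", num_return_sequences=5)
```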
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Me on the left 🤖 AI Bot </div>
<div style="font-size: 15px">@dontgender bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dontgender's tweets](https://twitter.com/dontgender).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2340 |
| Retweets | 1023 |
| Short tweets | 311 |
| Tweets kept | 1006 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34s4a2i7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dontgender's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/sl8zueoq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/sl8zueoq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dontgender')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dontgender/1614140992709/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dontgender
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Me on the left AI Bot
@dontgender bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dontgender's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dontgender's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
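For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/dontgender')
generator("My dream is", num_return_sequences=5)
```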
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Don Winslow 🤖 AI Bot </div>
<div style="font-size: 15px">@donwinslow bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@donwinslow's tweets](https://twitter.com/donwinslow).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3219 |
| Retweets | 1841 |
| Short tweets | 169 |
| Tweets kept | 1209 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jonj6ue/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @donwinslow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jogue52) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jogue52/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/donwinslow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/donwinslow/1612878348095/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/donwinslow
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Don Winslow AI Bot
@donwinslow bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @donwinslow's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @donwinslow's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
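For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/donwinslow')
generator("My dream is", num_return_sequences=5)
```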
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Thistle Bnuuy 🤖 AI Bot </div>
<div style="font-size: 15px">@dorkyfolf bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dorkyfolf's tweets](https://twitter.com/dorkyfolf).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2881 |
| Retweets | 1665 |
| Short tweets | 255 |
| Tweets kept | 961 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2m0yq9vg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dorkyfolf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wv3osjp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wv3osjp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dorkyfolf')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dorkyfolf/1617804114723/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dorkyfolf
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Thistle Bnuuy AI Bot
@dorkyfolf bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dorkyfolf's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dorkyfolf's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
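For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/dorkyfolf')
generator("My dream is", num_return_sequences=5)
```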
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Carlos Santana - DotCSV 🧠🤖 🤖 AI Bot </div>
<div style="font-size: 15px">@dotcsv bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dotcsv's tweets](https://twitter.com/dotcsv).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3219 |
| Retweets | 1037 |
| Short tweets | 238 |
| Tweets kept | 1944 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36v1c13g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dotcsv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g04fco4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g04fco4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dotcsv')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/dotcsv/1619159083139/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dotcsv
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Carlos Santana - DotCSV AI Bot
@dotcsv bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dotcsv's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dotcsv's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
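For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/dotcsv')
generator("My dream is", num_return_sequences=5)
```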
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">daniel 🤖 AI Bot </div>
<div style="font-size: 15px">@downgrad3d bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@downgrad3d's tweets](https://twitter.com/downgrad3d).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 441 |
| Retweets | 138 |
| Short tweets | 82 |
| Tweets kept | 221 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6eqzlox6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @downgrad3d's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1fsmvsit) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1fsmvsit/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/downgrad3d')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/downgrad3d/1614303163871/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/downgrad3d
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
daniel AI Bot
@downgrad3d bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @downgrad3d's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @downgrad3d's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
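For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/downgrad3d')
generator("My dream is", num_return_sequences=5)
```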
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Donovan</div>
<div style="text-align: center; font-size: 14px;">@dp_crazy_gamer</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Donovan.
| Data | Donovan |
| --- | --- |
| Tweets downloaded | 3214 |
| Retweets | 763 |
| Short tweets | 824 |
| Tweets kept | 1627 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pvd0ays/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dp_crazy_gamer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/14bwewth) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/14bwewth/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dp_crazy_gamer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/dp_crazy_gamer/1643299090939/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dp_crazy_gamer
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Donovan
@dp\_crazy\_gamer
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Donovan.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dp\_crazy\_gamer's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
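For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/dp_crazy_gamer')
generator("My dream is", num_return_sequences=5)
```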
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">David Pakman 🤖 AI Bot </div>
<div style="font-size: 15px">@dpakman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dpakman's tweets](https://twitter.com/dpakman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 49 |
| Short tweets | 418 |
| Tweets kept | 2783 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/el9fwqxw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dpakman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2esg5gfa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2esg5gfa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dpakman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dpakman
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
David Pakman AI Bot
@dpakman bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dpakman's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dpakman's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
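For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/dpakman')
generator("My dream is", num_return_sequences=5)
```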
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🏳️🌈Dragonogon🏳️⚧️🐲 🤖 AI Bot </div>
<div style="font-size: 15px">@dragonogon bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dragonogon's tweets](https://twitter.com/dragonogon).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 988 |
| Short tweets | 346 |
| Tweets kept | 1901 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/egunl2pl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dragonogon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lgtnz96) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lgtnz96/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dragonogon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dragonogon
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Dragonogon AI Bot
@dragonogon bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @dragonogon's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dragonogon's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
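For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/dragonogon')
generator("My dream is", num_return_sequences=5)
```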
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Drizzy</div>
<div style="text-align: center; font-size: 14px;">@drake</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Drizzy.
| Data | Drizzy |
| --- | --- |
| Tweets downloaded | 1766 |
| Retweets | 396 |
| Short tweets | 151 |
| Tweets kept | 1219 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/178j75wb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drake's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gvmezqz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gvmezqz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/drake')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/drake/1631662344811/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/drake
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
AI BOT
Drizzy
@drake
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Drizzy.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @drake's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
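For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/drake')
generator("My dream is", num_return_sequences=5)
```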
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">izzy 😼 (anti-ableism arc)</div>
<div style="text-align: center; font-size: 14px;">@drbelbel0</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from izzy 😼 (anti-ableism arc).
| Data | izzy 😼 (anti-ableism arc) |
| --- | --- |
| Tweets downloaded | 340 |
| Retweets | 174 |
| Short tweets | 57 |
| Tweets kept | 109 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y28lpi1f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drbelbel0's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/362qf1n5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/362qf1n5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/drbelbel0')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/drbelbel0/1627246944704/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/drbelbel0
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
izzy (anti-ableism arc)
@drbelbel0
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from izzy (anti-ableism arc).
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @drbelbel0's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
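For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/drbelbel0')
generator("My dream is", num_return_sequences=5)
```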
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/2684024563/9660a122cc7fa5a3d348e16614ebb7a7_400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dr. Brian May 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@drbrianmay bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@drbrianmay's tweets](https://twitter.com/drbrianmay).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3232</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>448</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>60</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2724</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3ee80djp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drbrianmay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1zzbge0u) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1zzbge0u/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/drbrianmay'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/drbrianmay
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dr. Brian May AI Bot </div>
<div style="font-size: 15px; color: #657786">@drbrianmay bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @drbrianmay's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3232</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>448</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>60</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2724</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @drbrianmay's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/drbrianmay'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Andrew Maragni 🇺🇸</div>
<div style="text-align: center; font-size: 14px;">@drew106</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Andrew Maragni 🇺🇸.
| Data | Andrew Maragni 🇺🇸 |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 786 |
| Short tweets | 176 |
| Tweets kept | 2282 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pfjcjeb0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drew106's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3e1rv18u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3e1rv18u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/drew106')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/drew106/1627055915329/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/drew106
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Andrew Maragni 🇺🇸
@drew106
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Andrew Maragni 🇺🇸.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @drew106's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
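For reference, a minimal sketch of the stripped snippet here, mirroring the fenced pipeline example in the markdown card above:

```python
from transformers import pipeline

# Load this card's fine-tuned huggingtweets model for text generation
generator = pipeline('text-generation',
                     model='huggingtweets/drew106')
generator("My dream is", num_return_sequences=5)
```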
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">drewcoffman.eth 𝕚𝕤 𝕠𝕟𝕝𝕚𝕟𝕖 🟢</div>
<div style="text-align: center; font-size: 14px;">@drewcoffman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from drewcoffman.eth 𝕚𝕤 𝕠𝕟𝕝𝕚𝕟𝕖 🟢.
| Data | drewcoffman.eth 𝕚𝕤 𝕠𝕟𝕝𝕚𝕟𝕖 🟢 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 43 |
| Short tweets | 540 |
| Tweets kept | 2667 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2kh4r1d8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drewcoffman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ln9svwl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ln9svwl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/drewcoffman')
generator("My dream is", num_return_sequences=5)
```
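Generation is stochastic, so repeated calls return different tweets. For reproducible output you can fix the random seed first — a short sketch using `set_seed`, which seeds Python, NumPy, and PyTorch in one call:

```python
from transformers import pipeline, set_seed

set_seed(42)  # fix RNG state so the sampled tweets are reproducible
generator = pipeline('text-generation',
                     model='huggingtweets/drewcoffman')
generator("My dream is", num_return_sequences=5)
```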
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/drewcoffman/1627699166305/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/drewcoffman
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
URL 𝕚𝕤 𝕠𝕟𝕝𝕚𝕟𝕖 🟢
@drewcoffman
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from URL 𝕚𝕤 𝕠𝕟𝕝𝕚𝕟𝕖 🟢.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @drewcoffman's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468502340634296326/gbl8-ltv_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Jril & wintbot_neo</div>
<div style="text-align: center; font-size: 14px;">@dril-drilbot_neo-jril_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Jril & wintbot_neo.
| Data | wint | Jril | wintbot_neo |
| --- | --- | --- | --- |
| Tweets downloaded | 3228 | 113 | 3241 |
| Retweets | 475 | 0 | 315 |
| Short tweets | 305 | 0 | 453 |
| Tweets kept | 2448 | 113 | 2473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27nmrlyy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-drilbot_neo-jril_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-drilbot_neo-jril_bot')
generator("My dream is", num_return_sequences=5)
```
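Because this model blends three accounts, it can be interesting to compare completions across several prompts at once. The pipeline accepts a list of prompts and returns one list of generations per prompt — a sketch with placeholder prompts:

```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/dril-drilbot_neo-jril_bot')

prompts = ["My dream is", "I am once again"]  # placeholder prompts
for prompt, outputs in zip(prompts, generator(prompts, num_return_sequences=2)):
    for out in outputs:  # one dict per generated sequence
        print(prompt, '->', out['generated_text'])
```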
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/dril-drilbot_neo-jril_bot/1643968320729/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/dril-drilbot_neo-jril_bot
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
wint & Jril & wintbot\_neo
@dril-drilbot\_neo-jril\_bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from wint & Jril & wintbot\_neo.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @dril-drilbot\_neo-jril\_bot's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
![Follow](URL)
For more details, visit the project repository.
![GitHub stars](URL)
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
[
54
] |
[
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
[
-0.0001764487533364445,
-0.01891571842133999,
-0.0068881697952747345,
0.01242890115827322,
0.16224369406700134,
0.04406825825572014,
0.08452208340167999,
0.14250440895557404,
-0.026455026119947433,
-0.016114573925733566,
0.17334569990634918,
0.17106501758098602,
-0.014037161134183407,
0.08718273043632507,
-0.05552244931459427,
-0.2646014094352722,
0.044212065637111664,
0.058431971818208694,
-0.020032864063978195,
0.14111687242984772,
0.0714879110455513,
-0.01828647032380104,
0.10845158249139786,
-0.02953636273741722,
-0.18877948820590973,
0.03499612212181091,
0.05605728179216385,
-0.09992336481809616,
0.11936124414205551,
0.04713086411356926,
0.08659981191158295,
0.015409729443490505,
-0.07482189685106277,
-0.12249794602394104,
0.03866785019636154,
0.041891295462846756,
-0.0629689022898674,
0.05911329388618469,
0.08818957209587097,
-0.11120674759149551,
0.1456013321876526,
0.07799911499023438,
-0.01863223686814308,
0.07941857725381851,
-0.17164963483810425,
-0.019008882343769073,
-0.036983806639909744,
0.005464354529976845,
0.057414017617702484,
0.07476010918617249,
-0.01932354085147381,
0.1732589453458786,
-0.0767555758357048,
0.09587065875530243,
0.16117197275161743,
-0.2913956344127655,
-0.005072304047644138,
0.0498935841023922,
0.06722559779882431,
0.03902119770646095,
-0.015738610178232193,
0.08706029504537582,
0.06277379393577576,
0.02536560781300068,
-0.0014978112885728478,
-0.06339331716299057,
-0.0928240567445755,
0.04014677554368973,
-0.0745372325181961,
-0.06578013300895691,
0.20811960101127625,
-0.039430033415555954,
0.050536420196294785,
-0.03807967156171799,
-0.10491003096103668,
-0.02197154052555561,
-0.015753688290715218,
0.00720712635666132,
-0.06031509116292,
0.08949250727891922,
-0.014659170061349869,
-0.07363511621952057,
-0.15025688707828522,
-0.016003666445612907,
-0.18369624018669128,
0.1574522852897644,
0.003967532888054848,
0.04864242300391197,
-0.2093716263771057,
0.11408322304487228,
0.020872395485639572,
-0.08021171391010284,
0.047296058386564255,
-0.09546594321727753,
0.07179957628250122,
0.002116252202540636,
-0.05267130210995674,
-0.02264278009533882,
0.08682441711425781,
0.15258505940437317,
-0.026140356436371803,
0.0017382961232215166,
-0.027155736461281776,
0.07059825956821442,
0.05281682312488556,
0.040018972009420395,
-0.017763059586286545,
-0.04289618879556656,
0.045782025903463364,
-0.15945611894130707,
-0.007582054473459721,
-0.06918198615312576,
-0.1068587526679039,
-0.05112413689494133,
0.022331949323415756,
0.06442617624998093,
0.031878020614385605,
0.11345649510622025,
-0.04609488695859909,
-0.014168480411171913,
0.06350603699684143,
-0.042460083961486816,
-0.0172110628336668,
-0.016047241166234016,
0.015319211408495903,
0.14095398783683777,
-0.018181998282670975,
0.030711950734257698,
-0.11251416057348251,
0.0645761713385582,
-0.09946728497743607,
-0.01914660818874836,
-0.0071241967380046844,
-0.04078202322125435,
0.029947424307465553,
-0.13505136966705322,
0.010975461453199387,
-0.1711256355047226,
-0.14874492585659027,
0.009897243231534958,
-0.02876151353120804,
-0.018498392775654793,
-0.059200938791036606,
-0.03973992541432381,
-0.01719699800014496,
0.06084148958325386,
-0.04528016224503517,
0.0009662628290243447,
-0.05951235070824623,
0.11524756252765656,
-0.05245914310216904,
0.06888316571712494,
-0.13250301778316498,
0.05976588651537895,
-0.15375985205173492,
-0.00870948750525713,
-0.04503806680440903,
0.08287355303764343,
0.017666861414909363,
0.16653785109519958,
-0.006960130762308836,
-0.012972959317266941,
-0.09847598522901535,
0.06441020965576172,
-0.023426895961165428,
0.24133971333503723,
-0.06349262595176697,
-0.143473818898201,
0.22233937680721283,
-0.06944137066602707,
-0.1420930027961731,
0.12657590210437775,
0.020838137716054916,
0.07354387640953064,
0.10204131156206131,
0.19502143561840057,
0.014321080408990383,
0.006638950202614069,
0.054760824888944626,
0.0820525735616684,
-0.17468082904815674,
-0.03090454451739788,
-0.00916894432157278,
-0.01691337674856186,
-0.1358218789100647,
0.043003637343645096,
0.11889315396547318,
0.10293904691934586,
-0.07099705934524536,
-0.013554582372307777,
-0.03317642584443092,
-0.004439891315996647,
0.06940841674804688,
-0.007896981202065945,
0.09806080162525177,
-0.09929461032152176,
-0.040301576256752014,
-0.058301206678152084,
-0.006890235934406519,
0.0024084753822535276,
0.04120192304253578,
-0.040066637098789215,
0.10096675902605057,
-0.0006905568297952414,
0.05225489288568497,
-0.14046427607536316,
-0.07798092812299728,
-0.020940499380230904,
0.1575685739517212,
0.04120592027902603,
0.04776391014456749,
0.057414863258600235,
-0.0481991246342659,
-0.0158701092004776,
-0.009968787431716919,
0.16312187910079956,
-0.0394844189286232,
-0.06968989968299866,
-0.056453973054885864,
0.10656707733869553,
-0.058365050703287125,
0.03222460299730301,
-0.04202231392264366,
0.022449776530265808,
0.060929615050554276,
0.12114907056093216,
-0.0026538248639553785,
0.029062092304229736,
-0.01023187953978777,
-0.0038288652431219816,
-0.07459788024425507,
-0.020947640761733055,
0.10031349956989288,
-0.004600553773343563,
-0.08327498286962509,
0.23685328662395477,
-0.17454755306243896,
0.19663569331169128,
0.2115958034992218,
-0.2628093659877777,
-0.024321777746081352,
-0.06840608268976212,
-0.05017746612429619,
0.003011465771123767,
0.05876095965504646,
-0.04692309722304344,
0.09809201210737228,
-0.02521132305264473,
0.16497591137886047,
-0.04652651026844978,
-0.07362692058086395,
0.016456644982099533,
-0.05898859724402428,
-0.0463954322040081,
0.0659271627664566,
0.08106416463851929,
-0.15480613708496094,
0.18694530427455902,
0.20838376879692078,
0.07612863928079605,
0.19334357976913452,
0.004058185499161482,
-0.014812501147389412,
0.08005014806985855,
-0.03805047646164894,
-0.04202093929052353,
-0.07553261518478394,
-0.16944189369678497,
-0.01902174763381481,
0.07485251128673553,
0.03750864416360855,
0.11274250596761703,
-0.10172852873802185,
-0.07372885197401047,
-0.016179129481315613,
-0.005032413639128208,
0.005167648661881685,
0.1174640879034996,
0.045775800943374634,
0.14043675363063812,
-0.019972821697592735,
0.03493902459740639,
0.08747350424528122,
0.02448674477636814,
-0.10759711265563965,
0.16407065093517303,
-0.14081640541553497,
-0.38538745045661926,
-0.16212545335292816,
-0.13394121825695038,
-0.029274288564920425,
0.04825805127620697,
0.11038947850465775,
-0.13598975539207458,
0.0011978530092164874,
-0.003706524148583412,
0.12342415004968643,
-0.0806080624461174,
0.03755999356508255,
-0.07838296890258789,
0.026997538283467293,
-0.06349453330039978,
-0.07917723804712296,
-0.036463622003793716,
-0.03232228383421898,
-0.10000553727149963,
0.1757805496454239,
-0.11054177582263947,
0.057571277022361755,
0.1741490364074707,
0.022440658882260323,
0.034390855580568314,
-0.0513761006295681,
0.17275545001029968,
-0.11779367178678513,
0.02093288116157055,
0.16278521716594696,
-0.01799617148935795,
0.08678310364484787,
0.08059167861938477,
-0.015366556122899055,
-0.10777527838945389,
0.05196633189916611,
0.0019955262541770935,
-0.1096891239285469,
-0.20052047073841095,
-0.12150565534830093,
-0.0784008651971817,
0.14483654499053955,
0.05303339660167694,
0.05915789678692818,
0.17167222499847412,
0.08591149002313614,
-0.04288473725318909,
-0.004711467772722244,
-0.012867298908531666,
0.07781979441642761,
0.1684085726737976,
-0.017248503863811493,
0.11789125204086304,
-0.05446818470954895,
-0.11601924896240234,
0.13826869428157806,
0.02504623495042324,
0.050291191786527634,
0.04182872176170349,
0.008374262601137161,
-0.009610554203391075,
0.09969738125801086,
0.12988939881324768,
0.118865467607975,
-0.008107239380478859,
-0.0232877004891634,
-0.03601100295782089,
-0.00860752072185278,
-0.03570752218365669,
0.034572016447782516,
0.011757065542042255,
-0.16013272106647491,
-0.05726486071944237,
-0.12173470109701157,
0.0964120477437973,
0.09787409752607346,
0.08039643615484238,
-0.2033146470785141,
-0.004589703865349293,
0.07378163933753967,
-0.03603411093354225,
-0.11624246090650558,
0.086527980864048,
0.033109065145254135,
-0.1271866261959076,
0.0817195475101471,
-0.03352120518684387,
0.115711510181427,
-0.017423994839191437,
0.09427224844694138,
-0.04346824064850807,
-0.0329415462911129,
-0.012381686829030514,
0.10430185496807098,
-0.30799204111099243,
0.17485815286636353,
-0.019660785794258118,
-0.07034741342067719,
-0.07672256976366043,
-0.025566134601831436,
0.017929747700691223,
0.07530300319194794,
0.09619415551424026,
0.024311896413564682,
0.04642496258020401,
-0.09243860840797424,
-0.03940937668085098,
0.034113768488168716,
0.13641610741615295,
-0.0638844221830368,
-0.015862328931689262,
-0.04075292870402336,
0.01116214320063591,
-0.019626103341579437,
-0.027355113998055458,
0.018256209790706635,
-0.1504947543144226,
0.05358212813735008,
0.017237937077879906,
0.0753018707036972,
0.03889141231775284,
-0.007973386906087399,
-0.10224062204360962,
0.18268363177776337,
-0.03004412353038788,
-0.08521177619695663,
-0.127006396651268,
-0.04812724515795708,
0.04587927460670471,
-0.051474425941705704,
0.034381575882434845,
-0.06691597402095795,
-0.011345877312123775,
-0.06886660307645798,
-0.21483656764030457,
0.12495172768831253,
-0.0775398537516594,
-0.07874035835266113,
-0.03474915772676468,
0.20981398224830627,
-0.05076101794838905,
-0.00018431349599268287,
0.01172169204801321,
0.014822970144450665,
-0.1086968258023262,
-0.10466268658638,
0.06874550879001617,
-0.034708425402641296,
0.02743770368397236,
0.02760813757777214,
-0.03700246661901474,
0.02073092758655548,
-0.06074898689985275,
-0.01314478274434805,
0.2849438786506653,
0.22848764061927795,
-0.035804633051157,
0.1875685602426529,
0.10711772739887238,
-0.07248730212450027,
-0.30828598141670227,
-0.0999293103814125,
-0.13133330643177032,
-0.033457282930612564,
-0.02052777260541916,
-0.17081500589847565,
0.07089676707983017,
0.04558560997247696,
0.00998434517532587,
0.14900504052639008,
-0.21140912175178528,
-0.08518896996974945,
0.1092359870672226,
-0.03466132655739784,
0.42140644788742065,
-0.1164151057600975,
-0.09652433544397354,
-0.05340435355901718,
-0.15239587426185608,
0.20086675882339478,
-0.01557826716452837,
0.08761543780565262,
-0.031178412958979607,
0.14360429346561432,
0.04995222017168999,
-0.01680157333612442,
0.08329997956752777,
0.0014065488940104842,
0.004026432521641254,
-0.12653036415576935,
-0.022627411410212517,
0.0504734143614769,
0.021120509132742882,
0.0054380460642278194,
-0.07736434042453766,
0.028706049546599388,
-0.14863882958889008,
-0.024921666830778122,
-0.10909338295459747,
0.08278775960206985,
0.03857516124844551,
-0.07422378659248352,
-0.010892878286540508,
-0.05615931376814842,
-0.023589344695210457,
-0.012528443709015846,
0.13453009724617004,
-0.050522636622190475,
0.1720355898141861,
0.036824267357587814,
0.11522943526506424,
-0.13816054165363312,
0.06146138161420822,
-0.07406572997570038,
-0.07532623410224915,
0.06773588061332703,
-0.13577620685100555,
0.05240122601389885,
0.10075365751981735,
-0.03372569754719734,
0.04758675396442413,
0.08927652984857559,
0.000919255951885134,
0.009403540752828121,
0.15953567624092102,
-0.2761637568473816,
0.01791755110025406,
-0.07046908140182495,
-0.07692829519510269,
0.112159863114357,
0.07484955340623856,
0.181244894862175,
0.02791808731853962,
-0.0472177192568779,
0.012221097014844418,
0.019269011914730072,
-0.05120278522372246,
0.0548672117292881,
0.006578541360795498,
-0.011816216632723808,
-0.14148055016994476,
0.08790901303291321,
-0.0015104453777894378,
-0.1429632604122162,
0.02210722118616104,
0.19665491580963135,
-0.13288317620754242,
-0.10024755448102951,
-0.05081937462091446,
0.057288672775030136,
-0.13861924409866333,
0.008680099621415138,
-0.01990172080695629,
-0.09562872350215912,
0.0756591260433197,
0.1573420912027359,
0.05079081282019615,
0.12963363528251648,
-0.02916029281914234,
-0.008411908522248268,
-0.04279141128063202,
-0.051675889641046524,
0.02788730151951313,
0.019720058888196945,
-0.07752392441034317,
0.08735781908035278,
-0.024015765637159348,
0.14329436421394348,
-0.10051412135362625,
-0.06987810134887695,
-0.1344141960144043,
-0.005019306670874357,
-0.09607285261154175,
-0.0959741547703743,
-0.08079583197832108,
-0.061935000121593475,
0.004180733114480972,
-0.039079975336790085,
-0.0417400486767292,
-0.08063078671693802,
-0.10278100520372391,
0.016344387084245682,
-0.02737213484942913,
0.028547246009111404,
-0.07021904736757278,
0.008450011722743511,
0.12212047725915909,
-0.029432062059640884,
0.17478644847869873,
0.15045183897018433,
-0.10443241149187088,
0.10585668683052063,
-0.17528948187828064,
-0.10057486593723297,
0.10386897623538971,
-0.01424362976104021,
0.02509693056344986,
0.12928596138954163,
0.018586233258247375,
0.04242280498147011,
0.03278495371341705,
0.06450606882572174,
0.04438134282827377,
-0.11879310011863708,
0.08265755325555801,
-0.002198860514909029,
-0.15595629811286926,
-0.061124756932258606,
-0.09235648810863495,
0.02707000821828842,
0.02035105973482132,
0.11072126775979996,
-0.04177020862698555,
0.08901136368513107,
-0.06589431315660477,
0.023006385192275047,
0.025188656523823738,
-0.17641755938529968,
-0.03692740947008133,
-0.05073147267103195,
0.03235582634806633,
0.028527792543172836,
0.22216679155826569,
0.015049049630761147,
-0.030550595372915268,
0.0387740358710289,
0.12439186871051788,
0.015807392075657845,
-0.0005460345419123769,
0.1709187924861908,
0.10334094613790512,
-0.07364179939031601,
-0.14011329412460327,
0.0676020160317421,
0.012262817472219467,
-0.05599943920969963,
0.11458845436573029,
-0.01812131516635418,
-0.006866521667689085,
0.0670672059059143,
-0.019556820392608643,
0.03449620306491852,
-0.06607513129711151,
-0.12762659788131714,
-0.02478908933699131,
0.04269528388977051,
0.0029638311825692654,
0.12794137001037598,
0.15664103627204895,
-0.005168106406927109,
0.026651626452803612,
-0.015845872461795807,
-0.023246217519044876,
-0.13421551883220673,
-0.15646639466285706,
-0.06222613900899887,
-0.14998294413089752,
0.022462334483861923,
-0.08663632720708847,
0.047867026180028915,
0.05302877724170685,
0.07266844809055328,
-0.06541765481233597,
0.06867647916078568,
0.049353908747434616,
-0.11939282715320587,
0.08969518542289734,
-0.024748487398028374,
0.04455447196960449,
-0.0018985075876116753,
-0.03077637031674385,
-0.10359742492437363,
0.042003192007541656,
-0.01649407483637333,
0.045346152037382126,
-0.04421991854906082,
0.022579338401556015,
-0.1750071942806244,
-0.10995419323444366,
-0.04818427562713623,
0.06838665902614594,
-0.06683412194252014,
0.04056481271982193,
0.01606888137757778,
0.011690732091665268,
0.030521972104907036,
0.22083373367786407,
-0.041100796312093735,
-0.04384056478738785,
-0.041319385170936584,
0.1635150909423828,
-0.014866671524941921,
0.08733032643795013,
-0.027159888297319412,
-0.008534550666809082,
-0.09159335494041443,
0.3551705777645111,
0.29626867175102234,
-0.08988689631223679,
0.018322573974728584,
-0.023403000086545944,
0.04054094851016998,
0.13749836385250092,
0.13930056989192963,
0.09693139791488647,
0.23870247602462769,
-0.069560706615448,
-0.05235563591122627,
-0.01810343936085701,
-0.013992605730891228,
-0.06698887050151825,
0.09740167111158371,
0.02308662235736847,
-0.05992421507835388,
-0.0399320125579834,
0.09157036244869232,
-0.2363831102848053,
0.1119081899523735,
-0.1049022525548935,
-0.15424089133739471,
-0.03413557633757591,
0.011207441799342632,
0.07758733630180359,
0.01318280678242445,
0.11218693852424622,
0.013466300442814827,
-0.0844072625041008,
0.014990749768912792,
0.034041877835989,
-0.25270384550094604,
0.004782035481184721,
0.053680628538131714,
-0.12391778826713562,
-0.004765757359564304,
-0.025234686210751534,
0.013625700026750565,
0.05700398609042168,
0.04094824194908142,
-0.03069460764527321,
0.016096249222755432,
-0.006652043666690588,
-0.020144162699580193,
-0.008288858458399773,
0.05953062325716019,
0.04713095352053642,
-0.1559005230665207,
0.06380777060985565,
-0.13577982783317566,
0.04075455665588379,
-0.023885579779744148,
-0.012463473714888096,
-0.0012933483812958002,
0.01776987873017788,
-0.0522073432803154,
0.06581555306911469,
0.07255802303552628,
-0.0144086554646492,
0.008549283258616924,
-0.08346639573574066,
-0.034440234303474426,
-0.023841137066483498,
-0.10951124131679535,
-0.08404874801635742,
-0.13158579170703888,
-0.12121031433343887,
0.10822955518960953,
-0.02819518744945526,
-0.18475614488124847,
0.03248962014913559,
-0.12301548570394516,
0.05860345810651779,
-0.17328192293643951,
0.11332228779792786,
0.07014153152704239,
0.018578365445137024,
0.01264720968902111,
-0.007008096668869257,
0.08073210716247559,
0.11507359892129898,
-0.0813264548778534,
-0.08199906349182129
] |