Dataset columns:

| Column | Type |
| --- | --- |
| sha | null |
| last_modified | null |
| library_name | string (154 distinct values) |
| text | string (1-900k chars) |
| metadata | string (2-348k chars) |
| pipeline_tag | string (45 distinct values) |
| id | string (5-122 chars) |
| tags | list (1-1.84k items) |
| created_at | string (25 chars) |
| arxiv | list (0-201 items) |
| languages | list (0-1.83k items) |
| tags_str | string (17-9.34k chars) |
| text_str | string (0-389k chars) |
| text_lists | list (0-722 items) |
| processed_texts | list (1-723 items) |
| tokens_length | list (1-723 items) |
| input_texts | list (1-61 items) |
| embeddings | list (768 items) |
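For orientation, a dump with this schema can be inspected with the `datasets` library. A minimal sketch, assuming the dump is published on the Hugging Face Hub; `DATASET_ID` is a hypothetical placeholder, not the dump's actual repository name:

```python
from datasets import load_dataset

# DATASET_ID is a placeholder -- substitute the real Hub id of this dump.
ds = load_dataset("DATASET_ID", split="train")

row = ds[0]
print(row["id"])               # e.g. "huggingtweets/..."
print(row["pipeline_tag"])     # e.g. "text-generation"
print(len(row["embeddings"]))  # 768, per the schema above
```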
null | null | transformers |
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1436764466868273159/z-bXRwzQ_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Scientist</div>
<div style="text-align: center; font-size: 14px;">@ihavesexhourly</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Scientist.
| Data | Scientist |
| --- | --- |
| Tweets downloaded | 3205 |
| Retweets | 841 |
| Short tweets | 621 |
| Tweets kept | 1743 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qyzrpd8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ihavesexhourly's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m2o7mtpw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m2o7mtpw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ihavesexhourly')
generator("My dream is", num_return_sequences=5)
```
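If you need more control over decoding than the pipeline exposes, the same checkpoint can be loaded through the lower-level Auto classes. A minimal sketch; the sampling settings below are illustrative, not values used by the project:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('huggingtweets/ihavesexhourly')
model = AutoModelForCausalLM.from_pretrained('huggingtweets/ihavesexhourly')

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs,
                         do_sample=True,          # sample instead of greedy decoding
                         max_new_tokens=40,       # illustrative length cap
                         num_return_sequences=5,
                         pad_token_id=tokenizer.eos_token_id)  # GPT-2 has no pad token
for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```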
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ihavesexhourly/1631841194880/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ihavesexhourly
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">juju 💰</div>
<div style="text-align: center; font-size: 14px;">@ihyjuju</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from juju 💰.
| Data | juju 💰 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 1 |
| Short tweets | 478 |
| Tweets kept | 2769 |
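The rows of the table are related by simple subtraction: retweets and short tweets are filtered out and the remainder is kept. A worked check of the numbers above (the actual preprocessing lives in the huggingtweets repository):

```python
# Filtering arithmetic implied by the table above (illustrative check only).
downloaded, retweets, short = 3248, 1, 478
kept = downloaded - retweets - short
assert kept == 2769  # matches "Tweets kept"
```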
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n82hqbg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ihyjuju's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1t6rclcz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1t6rclcz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ihyjuju')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/ihyjuju/1640741515385/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ihyjuju
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">¯\_(ツ)_/¯</div>
<div style="text-align: center; font-size: 14px;">@ijustbluemyself</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ¯\_(ツ)_/¯.
| Data | ¯\_(ツ)_/¯ |
| --- | --- |
| Tweets downloaded | 3224 |
| Retweets | 250 |
| Short tweets | 982 |
| Tweets kept | 1992 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qgmk16ox/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ijustbluemyself's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yq2ve7k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yq2ve7k/artifacts) is logged and versioned.
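The fine-tuning step itself is standard causal-language-model training. Below is a minimal sketch of that kind of run, assuming the kept tweets sit in a hypothetical `tweets.txt` file with one tweet per line; the hyperparameters are illustrative, not the values recorded in the W&B run above:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# tweets.txt is an assumed input file: one cleaned tweet per line.
dataset = load_dataset("text", data_files={"train": "tweets.txt"})
tokenized = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True),
                        batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```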
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ijustbluemyself')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ijustbluemyself/1625279746808/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ijustbluemyself
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/2429657879/vq8ux7qvn4ljg9oh7zzu_400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Iván Díaz 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ildiazm bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ildiazm's tweets](https://twitter.com/ildiazm).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 574 |
| Retweets | 72 |
| Short tweets | 12 |
| Tweets kept | 490 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3cn99ecb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ildiazm's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/167ssmah) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/167ssmah/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/ildiazm')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ildiazm
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Banon 🤖 AI Bot </div>
<div style="font-size: 15px">@ilike_birds bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ilike_birds's tweets](https://twitter.com/ilike_birds).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1017 |
| Retweets | 39 |
| Short tweets | 337 |
| Tweets kept | 641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21wt3y4x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ilike_birds's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g2q8s1w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g2q8s1w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ilike_birds')
generator("My dream is", num_return_sequences=5)
```
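Output is sampled, so repeated calls produce different completions. For reproducible output you can fix the random seed with `set_seed` from transformers:

```python
from transformers import pipeline, set_seed

set_seed(42)  # fix the RNGs so sampling is repeatable
generator = pipeline('text-generation', model='huggingtweets/ilike_birds')
generator("My dream is", num_return_sequences=5)
```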
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ilike_birds/1617813434047/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ilike_birds
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">IllinoisJones 🤖 AI Bot </div>
<div style="font-size: 15px">@iljone bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@iljone's tweets](https://twitter.com/iljone).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 337 |
| Retweets | 6 |
| Short tweets | 99 |
| Tweets kept | 232 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3l85ym1p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iljone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uzsj96o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uzsj96o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/iljone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/iljone/1616774453050/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/iljone
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Josh Cherry 🌱 🤖 AI Bot </div>
<div style="font-size: 15px">@ilovelucilius bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ilovelucilius's tweets](https://twitter.com/ilovelucilius).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 331 |
| Retweets | 42 |
| Short tweets | 9 |
| Tweets kept | 280 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ztd1uk0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ilovelucilius's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gbbrvx4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gbbrvx4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ilovelucilius')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ilovelucilius/1616644679483/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ilovelucilius
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ilya Sutskever</div>
<div style="text-align: center; font-size: 14px;">@ilyasut</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ilya Sutskever.
| Data | Ilya Sutskever |
| --- | --- |
| Tweets downloaded | 852 |
| Retweets | 474 |
| Short tweets | 39 |
| Tweets kept | 339 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y41t187f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ilyasut's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2slwglzj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2slwglzj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ilyasut')
generator("My dream is", num_return_sequences=5)
```
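The pipeline forwards generation keyword arguments to the underlying model, so decoding can be tuned at call time. The values below are illustrative, not project defaults:

```python
generator("My dream is",
          num_return_sequences=5,
          do_sample=True,      # sample rather than greedy-decode
          max_new_tokens=40,   # cap the length of each completion
          temperature=0.9,     # soften/sharpen the token distribution
          top_p=0.95)          # nucleus sampling
```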
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/ilyasut/1653408370188/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ilyasut
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Brown Timothée Chalamet 🤖 AI Bot </div>
<div style="font-size: 15px">@imaginary_bi bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imaginary_bi's tweets](https://twitter.com/imaginary_bi).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1204 |
| Retweets | 189 |
| Short tweets | 72 |
| Tweets kept | 943 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3srr04nu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imaginary_bi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3773072h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3773072h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/imaginary_bi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imaginary_bi/1614117239005/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imaginary_bi
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ace 🤖 AI Bot </div>
<div style="font-size: 15px">@imcummingonline bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imcummingonline's tweets](https://twitter.com/imcummingonline).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 914 |
| Retweets | 88 |
| Short tweets | 218 |
| Tweets kept | 608 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2yh36yxx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imcummingonline's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nnnr0u8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nnnr0u8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/imcummingonline')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imcummingonline/1617770513198/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imcummingonline
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Contra</div>
<div style="text-align: center; font-size: 14px;">@imgrimevil</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Contra.
| Data | Contra |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 669 |
| Short tweets | 582 |
| Tweets kept | 1987 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kn7qqp8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imgrimevil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fjaoumhd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fjaoumhd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/imgrimevil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imgrimevil/1627251988335/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imgrimevil
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jack Rudd 🇹🇹 🏳️⚧️</div>
<div style="text-align: center; font-size: 14px;">@imjackrudd</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jack Rudd 🇹🇹 🏳️⚧️.
| Data | Jack Rudd 🇹🇹 🏳️⚧️ |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 55 |
| Short tweets | 327 |
| Tweets kept | 2864 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g5589wt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imjackrudd's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eyywpszu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eyywpszu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/imjackrudd')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imjackrudd/1632871893609/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imjackrudd
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Luca 🤖 AI Bot </div>
<div style="font-size: 15px">@imjustluca bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imjustluca's tweets](https://twitter.com/imjustluca).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 379 |
| Short tweets | 261 |
| Tweets kept | 2578 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ap66ek7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imjustluca's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qfi3jgq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qfi3jgq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/imjustluca')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imjustluca/1614160603911/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imjustluca
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1298909619721388035/1v9WJxu7_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jaelynn 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@imjustuhgrl bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imjustuhgrl's tweets](https://twitter.com/imjustuhgrl).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3236</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>15</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>512</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2709</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/4phdk9xl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imjustuhgrl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/22432rm3) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/22432rm3/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/imjustuhgrl'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imjustuhgrl/1601318938681/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imjustuhgrl
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jaelynn AI Bot </div>
<div style="font-size: 15px; color: #657786">@imjustuhgrl bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @imjustuhgrl's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3236</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>15</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>512</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2709</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @imjustuhgrl's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/imjustuhgrl'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">gabagol cawfee 🤖 AI Bot </div>
<div style="font-size: 15px">@immarxistonline bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@immarxistonline's tweets](https://twitter.com/immarxistonline).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 340 |
| Short tweets | 732 |
| Tweets kept | 2154 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f3uoi57/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @immarxistonline's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tynoxd5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tynoxd5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/immarxistonline')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/immarxistonline/1617769482090/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/immarxistonline
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gabagol cawfee AI Bot
@immarxistonline bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @immarxistonline's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @immarxistonline's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
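For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/immarxistonline')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```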
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276507373260214275/RZ9iZEmJ_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">The Immersive Kind 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@immersivekind bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@immersivekind's tweets](https://twitter.com/immersivekind).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>435</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>171</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>4</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>260</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bh9dpmh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @immersivekind's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ovh81f8f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ovh81f8f/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/immersivekind'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/immersivekind
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">The Immersive Kind AI Bot </div>
<div style="font-size: 15px; color: #657786">@immersivekind bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @immersivekind's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>435</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>171</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>4</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>260</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @immersivekind's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/immersivekind'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🇨🇦📎 🤖 AI Bot </div>
<div style="font-size: 15px">@imnotseto bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imnotseto's tweets](https://twitter.com/imnotseto).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 342 |
| Retweets | 15 |
| Short tweets | 50 |
| Tweets kept | 277 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33rcvwm6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imnotseto's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35wya1gp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35wya1gp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imnotseto')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imnotseto/1614213422097/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imnotseto
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
🇨🇦 AI Bot
@imnotseto bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @imnotseto's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @imnotseto's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
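For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/imnotseto')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```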
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1335360624646295552/kaAOgc0s_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">imo !!! 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@imogenloisfox bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imogenloisfox's tweets](https://twitter.com/imogenloisfox).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>2473</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>883</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>219</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1371</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dm16o1m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imogenloisfox's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ectjmyn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ectjmyn/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/imogenloisfox'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imogenloisfox/1608309297782/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imogenloisfox
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">imo !!! AI Bot </div>
<div style="font-size: 15px; color: #657786">@imogenloisfox bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @imogenloisfox's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>2473</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>883</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>219</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1371</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @imogenloisfox's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/imogenloisfox'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Yi 🐳</div>
<div style="text-align: center; font-size: 14px;">@imrobertyi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Yi 🐳.
| Data | Robert Yi 🐳 |
| --- | --- |
| Tweets downloaded | 1353 |
| Retweets | 61 |
| Short tweets | 130 |
| Tweets kept | 1162 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cmckdcz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imrobertyi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fi24mvdb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fi24mvdb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imrobertyi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imrobertyi/1631652694998/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imrobertyi
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Robert Yi
@imrobertyi
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Robert Yi.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @imrobertyi's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
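For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/imrobertyi')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```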
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">carson 🤖 AI Bot </div>
<div style="font-size: 15px">@imscribbledude bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@imscribbledude's tweets](https://twitter.com/imscribbledude).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2286 |
| Retweets | 458 |
| Short tweets | 252 |
| Tweets kept | 1576 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2eyhb2dr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imscribbledude's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1t0me7sm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1t0me7sm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imscribbledude')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/imscribbledude/1614102197502/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/imscribbledude
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
carson AI Bot
@imscribbledude bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @imscribbledude's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @imscribbledude's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
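For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/imscribbledude')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```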
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Matthew Incantalupo 🤖 AI Bot </div>
<div style="font-size: 15px">@incantalupo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@incantalupo's tweets](https://twitter.com/incantalupo).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1738 |
| Retweets | 36 |
| Short tweets | 61 |
| Tweets kept | 1641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12pm0jbi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @incantalupo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vnxuapw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vnxuapw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/incantalupo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/incantalupo/1616711390839/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/incantalupo
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Matthew Incantalupo AI Bot
@incantalupo bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @incantalupo's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @incantalupo's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
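For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/incantalupo')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```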
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/561419401145376768/7OIwxUCC_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190256978007904257/TsXH7_nP_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Charmeuse & Sad Socrates & Vincent Van Gone</div>
<div style="text-align: center; font-size: 14px;">@incharmuese-sadsocrates-vvangone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Charmeuse & Sad Socrates & Vincent Van Gone.
| Data | Charmeuse | Sad Socrates | Vincent Van Gone |
| --- | --- | --- | --- |
| Tweets downloaded | 3238 | 3197 | 3233 |
| Retweets | 1165 | 40 | 1054 |
| Short tweets | 248 | 161 | 266 |
| Tweets kept | 1825 | 2996 | 1913 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13ochftk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @incharmuese-sadsocrates-vvangone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/incharmuese-sadsocrates-vvangone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/incharmuese-sadsocrates-vvangone/1635521727120/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/incharmuese-sadsocrates-vvangone
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Charmeuse & Sad Socrates & Vincent Van Gone
@incharmuese-sadsocrates-vvangone
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Charmeuse & Sad Socrates & Vincent Van Gone.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @incharmuese-sadsocrates-vvangone's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
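For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/incharmuese-sadsocrates-vvangone')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```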
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">indi 🍔 🤖 AI Bot </div>
<div style="font-size: 15px">@indiburger bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@indiburger's tweets](https://twitter.com/indiburger).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3104 |
| Retweets | 712 |
| Short tweets | 372 |
| Tweets kept | 2020 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3emok4ku/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @indiburger's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rpeuqv5y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rpeuqv5y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/indiburger')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/indiburger/1614096163881/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/indiburger
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
indi AI Bot
@indiburger bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @indiburger's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @indiburger's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
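For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/indiburger')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```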
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Алексей Целищев 🤖 AI Bot </div>
<div style="font-size: 15px">@infernocav bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@infernocav's tweets](https://twitter.com/infernocav).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 129 |
| Retweets | 8 |
| Short tweets | 16 |
| Tweets kept | 105 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fbjwvhg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @infernocav's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cxwbz9yp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cxwbz9yp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/infernocav')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/infernocav/1616656950369/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/infernocav
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Алексей Целищев AI Bot
@infernocav bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @infernocav's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @infernocav's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
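For example, here is a minimal sketch that mirrors the snippet from the full card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/infernocav')

# Sample five completions from the widget's example prompt
generator("My dream is", num_return_sequences=5)
```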
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">RIP TO THE VILLIAN 🤖 AI Bot </div>
<div style="font-size: 15px">@infinitedodge bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@infinitedodge's tweets](https://twitter.com/infinitedodge).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2774 |
| Retweets | 1524 |
| Short tweets | 123 |
| Tweets kept | 1127 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qerz9onf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @infinitedodge's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gw3u22x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gw3u22x/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/infinitedodge')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/infinitedodge/1614135156383/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/infinitedodge
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
RIP TO THE VILLIAN AI Bot
@infinitedodge bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @infinitedodge's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @infinitedodge's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
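Mirroring the snippet in the full card above, a minimal sketch (model id from this record's metadata):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @infinitedodge from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/infinitedodge')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```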
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">infosec_dominatrix 🤖 AI Bot </div>
<div style="font-size: 15px">@infosec_domme bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@infosec_domme's tweets](https://twitter.com/infosec_domme).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 542 |
| Retweets | 64 |
| Short tweets | 57 |
| Tweets kept | 421 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1s8mwvc2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @infosec_domme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qb5k1m0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qb5k1m0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/infosec_domme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/infosec_domme/1616349133246/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/infosec_domme
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
infosec\_dominatrix AI Bot
@infosec\_domme bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @infosec\_domme's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @infosec\_domme's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
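A minimal sketch of the call, mirroring the snippet in the full card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @infosec_domme from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/infosec_domme')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```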
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ingrida Šimonytė</div>
<div style="text-align: center; font-size: 14px;">@ingridasimonyte</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ingrida Šimonytė.
| Data | Ingrida Šimonytė |
| --- | --- |
| Tweets downloaded | 283 |
| Retweets | 17 |
| Short tweets | 10 |
| Tweets kept | 256 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vod103u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ingridasimonyte's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xm136ry) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xm136ry/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ingridasimonyte')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ingridasimonyte/1620506733305/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ingridasimonyte
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Ingrida Šimonytė
@ingridasimonyte
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Ingrida Šimonytė.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ingridasimonyte's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
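For reference, a minimal sketch of that pipeline call (model id from this record's metadata):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @ingridasimonyte from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ingridasimonyte')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```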
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">borghisattva 🤖 AI Bot </div>
<div style="font-size: 15px">@ingroupist bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ingroupist's tweets](https://twitter.com/ingroupist).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 154 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 154 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fl5icybp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ingroupist's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/218gj8om) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/218gj8om/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ingroupist')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ingroupist/1616685344882/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ingroupist
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
borghisattva AI Bot
@ingroupist bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @ingroupist's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ingroupist's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
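Mirroring the snippet in the full card above, a minimal sketch:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @ingroupist from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ingroupist')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```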
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">INHALING MY SHEET OF SUN</div>
<div style="text-align: center; font-size: 14px;">@inhalingmy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from INHALING MY SHEET OF SUN.
| Data | INHALING MY SHEET OF SUN |
| --- | --- |
| Tweets downloaded | 2647 |
| Retweets | 0 |
| Short tweets | 838 |
| Tweets kept | 1809 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/r1ksmwi2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @inhalingmy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2e1lrid4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2e1lrid4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/inhalingmy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/inhalingmy/1631843035059/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/inhalingmy
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
INHALING MY SHEET OF SUN
@inhalingmy
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from INHALING MY SHEET OF SUN.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @inhalingmy's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
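A minimal sketch of the call (model id from this record's metadata):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @inhalingmy from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/inhalingmy')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```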
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Su Başak 🤖 AI Bot </div>
<div style="font-size: 15px">@inmidonot bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@inmidonot's tweets](https://twitter.com/inmidonot).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 344 |
| Retweets | 6 |
| Short tweets | 15 |
| Tweets kept | 323 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mghgkpx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @inmidonot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12l0mm4t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12l0mm4t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/inmidonot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/inmidonot/1616937673978/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/inmidonot
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Su Başak AI Bot
@inmidonot bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @inmidonot's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @inmidonot's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
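For reference, a minimal sketch mirroring the snippet in the full card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @inmidonot from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/inmidonot')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```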
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Insert 🚩🦮 🤖 AI Bot </div>
<div style="font-size: 15px">@insert_name27 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@insert_name27's tweets](https://twitter.com/insert_name27).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 111 |
| Short tweets | 491 |
| Tweets kept | 2644 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3m2d1hmb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @insert_name27's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ajldnpxe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ajldnpxe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/insert_name27')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/insert_name27/1617820538616/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/insert_name27
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Insert AI Bot
@insert\_name27 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @insert\_name27's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @insert\_name27's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
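A minimal sketch of the pipeline call (model id from this record's metadata):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @insert_name27 from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/insert_name27')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```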
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1449364913890074627/SNmSlTYD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1450840619132260357/r9rdJtIp_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pratham & Insha & Savio Martin ⚡️</div>
<div style="text-align: center; font-size: 14px;">@insharamin-prathkum-saviomartin7</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pratham & Insha & Savio Martin ⚡️.
| Data | Pratham | Insha | Savio Martin ⚡️ |
| --- | --- | --- | --- |
| Tweets downloaded | 3246 | 3249 | 3249 |
| Retweets | 461 | 24 | 118 |
| Short tweets | 317 | 457 | 201 |
| Tweets kept | 2468 | 2768 | 2930 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/o7jfvmhp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @insharamin-prathkum-saviomartin7's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p2md0wva) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p2md0wva/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/insharamin-prathkum-saviomartin7')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/insharamin-prathkum-saviomartin7/1637920907734/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/insharamin-prathkum-saviomartin7
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Pratham & Insha & Savio Martin
@insharamin-prathkum-saviomartin7
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Pratham & Insha & Savio Martin.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @insharamin-prathkum-saviomartin7's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
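Mirroring the snippet in the full card above, a minimal sketch (model id from this record's metadata):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for this multi-user cyborg from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/insharamin-prathkum-saviomartin7')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```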
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Insufficiently Outraged 🤖 AI Bot </div>
<div style="font-size: 15px">@insufficientout bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@insufficientout's tweets](https://twitter.com/insufficientout).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 784 |
| Retweets | 26 |
| Short tweets | 68 |
| Tweets kept | 690 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5cu9fjjj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @insufficientout's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2c1v17ew) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2c1v17ew/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/insufficientout')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/insufficientout/1616757946042/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/insufficientout
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Insufficiently Outraged AI Bot
@insufficientout bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @insufficientout's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @insufficientout's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
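For reference, a minimal sketch of that pipeline call:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @insufficientout from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/insufficientout')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```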
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Spark Of Inquiry 🤖 AI Bot </div>
<div style="font-size: 15px">@interro__bang bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@interro__bang's tweets](https://twitter.com/interro__bang).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 114 |
| Retweets | 2 |
| Short tweets | 19 |
| Tweets kept | 93 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1k112d2n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @interro__bang's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uppi8vz0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uppi8vz0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/interro__bang')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/interro__bang/1616611219490/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/interro__bang
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Spark Of Inquiry AI Bot
@interro\_\_bang bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @interro\_\_bang's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @interro\_\_bang's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
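A minimal sketch of the call, mirroring the snippet in the full card above:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 for @interro__bang from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/interro__bang')

# Sample five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```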
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/608132742224568320/x3yrArdT_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Electronic Intifada 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@intifada bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@intifada's tweets](https://twitter.com/intifada).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3241</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>6</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3235</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1qmm4ybr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @intifada's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/8f4jzilg) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/8f4jzilg/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/intifada')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/intifada/1603110719648/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/intifada
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Electronic Intifada AI Bot </div>
<div style="font-size: 15px; color: #657786">@intifada bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @intifada's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3241</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>6</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3235</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @intifada's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/intifada')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/922432805426130944/Zv5SABlH_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Carlos E. Perez 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@intuitmachine bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@intuitmachine's tweets](https://twitter.com/intuitmachine).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3216</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>222</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>82</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2912</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3a25w014/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @intuitmachine's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/g4lfqgv1) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/g4lfqgv1/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/intuitmachine')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/intuitmachine
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Carlos E. Perez AI Bot </div>
<div style="font-size: 15px; color: #657786">@intuitmachine bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @intuitmachine's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3216</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>222</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>82</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2912</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @intuitmachine's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/intuitmachine')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1393211665001459713/gobLbDve_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Steve | Millionaire Habits & Investor's Theory</div>
<div style="text-align: center; font-size: 14px;">@investorstheory-steveonspeed</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Steve | Millionaire Habits & Investor's Theory.
| Data | Steve \| Millionaire Habits | Investor's Theory |
| --- | --- | --- |
| Tweets downloaded | 3245 | 3250 |
| Retweets | 330 | 168 |
| Short tweets | 320 | 660 |
| Tweets kept | 2595 | 2422 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2yk0pwia/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @investorstheory-steveonspeed's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hmaq3cx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hmaq3cx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/investorstheory-steveonspeed')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/investorstheory-steveonspeed/1622080865723/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/investorstheory-steveonspeed
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> AI CYBORG </div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Steve | Millionaire Habits & Investor's Theory</div>
<div style="text-align: center; font-size: 14px;">@investorstheory-steveonspeed</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on tweets from Steve | Millionaire Habits & Investor's Theory.
| Data | Steve \| Millionaire Habits | Investor's Theory |
| --- | --- | --- |
| Tweets downloaded | 3245 | 3250 |
| Retweets | 330 | 168 |
| Short tweets | 320 | 660 |
| Tweets kept | 2595 | 2422 |
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @investorstheory-steveonspeed's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
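A minimal sketch of that usage, mirroring the snippet from the rendered card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/investorstheory-steveonspeed')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```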
## Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ada IO 🤖 AI Bot </div>
<div style="font-size: 15px">@ioorbust bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ioorbust's tweets](https://twitter.com/ioorbust).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 789 |
| Retweets | 79 |
| Short tweets | 102 |
| Tweets kept | 608 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zuxd4c8i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ioorbust's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nt569uh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nt569uh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ioorbust')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ioorbust/1617757328084/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ioorbust
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Ada IO AI Bot
@ioorbust bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @ioorbust's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ioorbust's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
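For reference, the same pipeline call as in the rendered card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ioorbust')

# Sample five completions for the prompt
generator("My dream is", num_return_sequences=5)
```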
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1144996963252940800/VIHkkMCF_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">⌞ʙᴀʟᴀᴢꜱ⌝ 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@iotnerd bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@iotnerd's tweets](https://twitter.com/iotnerd).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3200</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>915</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>102</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2183</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gq45sm3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iotnerd's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ksu06s41) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ksu06s41/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/iotnerd'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/iotnerd/1611677898375/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/iotnerd
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">⌞ʙᴀʟᴀᴢꜱ⌝ AI Bot </div>
<div style="font-size: 15px; color: #657786">@iotnerd bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @iotnerd's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3200</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>915</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>102</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2183</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @iotnerd's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/iotnerd'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ivan Poduje</div>
<div style="text-align: center; font-size: 14px;">@ipoduje</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ivan Poduje.
| Data | Ivan Poduje |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 1035 |
| Short tweets | 135 |
| Tweets kept | 2060 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gyttyi09/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ipoduje's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29wmg1mk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29wmg1mk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ipoduje')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/ipoduje/1641572179072/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ipoduje
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Ivan Poduje
@ipoduje
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Ivan Poduje.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ipoduje's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
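A minimal sketch, using the model id from this record:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ipoduje')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```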
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Riikka Purra</div>
<div style="text-align: center; font-size: 14px;">@ir_rkp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Riikka Purra.
| Data | Riikka Purra |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 141 |
| Short tweets | 78 |
| Tweets kept | 3031 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w0bzvgu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ir_rkp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nj4v31w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nj4v31w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ir_rkp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/ir_rkp/1643976228944/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ir_rkp
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Riikka Purra
@ir\_rkp
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Riikka Purra.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ir\_rkp's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
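The same usage as shown in the rendered card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ir_rkp')

# Sample five completions for the prompt
generator("My dream is", num_return_sequences=5)
```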
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Kevin 🤖 AI Bot </div>
<div style="font-size: 15px">@is_he_batman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@is_he_batman's tweets](https://twitter.com/is_he_batman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 960 |
| Retweets | 51 |
| Short tweets | 75 |
| Tweets kept | 834 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25g6159m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @is_he_batman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yerrfcg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yerrfcg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/is_he_batman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/is_he_batman/1614109879160/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/is_he_batman
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Kevin AI Bot
@is\_he\_batman bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @is\_he\_batman's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @is\_he\_batman's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
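A minimal sketch of that pipeline call, mirroring the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/is_he_batman')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```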
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ishan 🤖 AI Bot </div>
<div style="font-size: 15px">@ishanspatil bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ishanspatil's tweets](https://twitter.com/ishanspatil).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2468 |
| Retweets | 346 |
| Short tweets | 231 |
| Tweets kept | 1891 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4iupc1l1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ishanspatil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/k7nyg63n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/k7nyg63n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ishanspatil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ishanspatil/1617782474953/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ishanspatil
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Ishan AI Bot
@ishanspatil bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @ishanspatil's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ishanspatil's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
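For reference, the snippet from the rendered card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ishanspatil')

# Sample five completions for the prompt
generator("My dream is", num_return_sequences=5)
```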
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">نورهان</div>
<div style="text-align: center; font-size: 14px;">@islamocommunism</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from نورهان.
| Data | نورهان |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 1205 |
| Short tweets | 227 |
| Tweets kept | 1764 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2l8ikj22/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @islamocommunism's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kngkxcq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kngkxcq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/islamocommunism')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/islamocommunism/1635014280450/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/islamocommunism
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
نورهان
@islamocommunism
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from نورهان.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @islamocommunism's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
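A minimal sketch of that usage, taken from this record's card:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/islamocommunism')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```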
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1368077075127603200/Z08slO2P_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Boston Psychology PhD & keyvan</div>
<div style="text-align: center; font-size: 14px;">@islamphobiacow-praisegodbarbon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Boston Psychology PhD & keyvan.
| Data | Boston Psychology PhD | keyvan |
| --- | --- | --- |
| Tweets downloaded | 3224 | 3242 |
| Retweets | 858 | 179 |
| Short tweets | 251 | 223 |
| Tweets kept | 2115 | 2840 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3egvdux4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @islamphobiacow-praisegodbarbon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34hmjrwi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34hmjrwi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/islamphobiacow-praisegodbarbon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/islamphobiacow-praisegodbarbon/1627056382131/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/islamphobiacow-praisegodbarbon
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Boston Psychology PhD & keyvan
@islamphobiacow-praisegodbarbon
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Boston Psychology PhD & keyvan.
Data: Tweets downloaded, Boston Psychology PhD: 3224, keyvan: 3242
Data: Retweets, Boston Psychology PhD: 858, keyvan: 179
Data: Short tweets, Boston Psychology PhD: 251, keyvan: 223
Data: Tweets kept, Boston Psychology PhD: 2115, keyvan: 2840
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @islamphobiacow-praisegodbarbon's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
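The same pipeline call shown in the rendered card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/islamphobiacow-praisegodbarbon')

# Sample five completions for the prompt
generator("My dream is", num_return_sequences=5)
```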
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">beff jezos</div>
<div style="text-align: center; font-size: 14px;">@islamphobiacow</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from beff jezos.
| Data | beff jezos |
| --- | --- |
| Tweets downloaded | 395 |
| Retweets | 36 |
| Short tweets | 37 |
| Tweets kept | 322 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1crtakdb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @islamphobiacow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29lljwti) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29lljwti/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/islamphobiacow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/islamphobiacow/1627597861566/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/islamphobiacow
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
beff jezos
@islamphobiacow
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from beff jezos.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @islamphobiacow's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
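A minimal usage sketch, mirroring the snippet in the card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/islamphobiacow')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```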
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Rizza Islam 🤖 AI Bot </div>
<div style="font-size: 15px">@islamrizza bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@islamrizza's tweets](https://twitter.com/islamrizza).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3195 |
| Retweets | 73 |
| Short tweets | 394 |
| Tweets kept | 2728 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t09cn5o0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @islamrizza's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m6l6wkff) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m6l6wkff/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/islamrizza')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/islamrizza/1619378181874/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/islamrizza
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Rizza Islam AI Bot
@islamrizza bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @islamrizza's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @islamrizza's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
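For reference, the same usage as in the rendered card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/islamrizza')

# Sample five completions for the prompt
generator("My dream is", num_return_sequences=5)
```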
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">nick casino 🤖 AI Bot </div>
<div style="font-size: 15px">@island_iverson bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@island_iverson's tweets](https://twitter.com/island_iverson).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3182 |
| Retweets | 367 |
| Short tweets | 193 |
| Tweets kept | 2622 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dlr58v3e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @island_iverson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vy3qci6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vy3qci6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/island_iverson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/island_iverson/1614113195211/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/island_iverson
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
nick casino AI Bot
@island\_iverson bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @island\_iverson's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @island\_iverson's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
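A minimal sketch, using the model id from this record:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/island_iverson')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```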
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1341001037142999041/h86Ch8TO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Science Bits & International Science Teaching Foundation</div>
<div style="text-align: center; font-size: 14px;">@istfoundation-sciencebits</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Science Bits & International Science Teaching Foundation.
| Data | Science Bits | International Science Teaching Foundation |
| --- | --- | --- |
| Tweets downloaded | 2741 | 163 |
| Retweets | 759 | 103 |
| Short tweets | 47 | 1 |
| Tweets kept | 1935 | 59 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c9crff9r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @istfoundation-sciencebits's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2c68vj42) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2c68vj42/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/istfoundation-sciencebits')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/istfoundation-sciencebits/1634209108264/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/istfoundation-sciencebits
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
Science Bits & International Science Teaching Foundation
@istfoundation-sciencebits
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Science Bits & International Science Teaching Foundation.
Data: Tweets downloaded, Science Bits: 2741, International Science Teaching Foundation: 163
Data: Retweets, Science Bits: 759, International Science Teaching Foundation: 103
Data: Short tweets, Science Bits: 47, International Science Teaching Foundation: 1
Data: Tweets kept, Science Bits: 1935, International Science Teaching Foundation: 59
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @istfoundation-sciencebits's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
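The same pipeline call as shown in the rendered card above:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/istfoundation-sciencebits')

# Sample five completions for the prompt
generator("My dream is", num_return_sequences=5)
```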
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">itemLabel 🤖 AI Bot </div>
<div style="font-size: 15px">@itemlabel bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itemlabel's tweets](https://twitter.com/itemlabel).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3188 |
| Retweets | 1796 |
| Short tweets | 389 |
| Tweets kept | 1003 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/10hookja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itemlabel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1u63m0wj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1u63m0wj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itemlabel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itemlabel
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
itemLabel AI Bot
@itemlabel bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @itemlabel's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itemlabel's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
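For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itemlabel')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```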
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Google ‘Its All Bullshit’ 🤖 AI Bot </div>
<div style="font-size: 15px">@itsall_bullshit bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itsall_bullshit's tweets](https://twitter.com/itsall_bullshit).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3158 |
| Retweets | 1762 |
| Short tweets | 98 |
| Tweets kept | 1298 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25y8c5ov/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsall_bullshit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/y0ks8zfn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/y0ks8zfn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsall_bullshit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itsall_bullshit/1617823122662/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itsall_bullshit
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Google ‘Its All Bullshit’ AI Bot
@itsall\_bullshit bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @itsall\_bullshit's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itsall\_bullshit's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
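For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itsall_bullshit')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```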
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Big Ian 🤖 AI Bot </div>
<div style="font-size: 15px">@itsbigian bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itsbigian's tweets](https://twitter.com/itsbigian).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 218 |
| Short tweets | 552 |
| Tweets kept | 2468 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2oczo3b8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsbigian's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/245obnds) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/245obnds/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsbigian')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itsbigian/1616883483325/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itsbigian
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Big Ian AI Bot
@itsbigian bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @itsbigian's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itsbigian's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
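For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itsbigian')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```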
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Harveen 🤖 AI Bot </div>
<div style="font-size: 15px">@itsharveen bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itsharveen's tweets](https://twitter.com/itsharveen).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 632 |
| Retweets | 30 |
| Short tweets | 40 |
| Tweets kept | 562 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/a779ia8t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsharveen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1dip1d5b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1dip1d5b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsharveen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itsharveen/1617627052674/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itsharveen
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Harveen AI Bot
@itsharveen bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @itsharveen's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itsharveen's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
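For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itsharveen')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```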
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">jane.flowers 🤖 AI Bot </div>
<div style="font-size: 15px">@itsjaneflowers bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itsjaneflowers's tweets](https://twitter.com/itsjaneflowers).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1054 |
| Retweets | 166 |
| Short tweets | 79 |
| Tweets kept | 809 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1af8sp4r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsjaneflowers's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25kv3ol0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25kv3ol0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsjaneflowers')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itsjaneflowers/1616859152962/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itsjaneflowers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
jane.flowers AI Bot
@itsjaneflowers bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @itsjaneflowers's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itsjaneflowers's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
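For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itsjaneflowers')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```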
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">june party corner</div>
<div style="text-align: center; font-size: 14px;">@itskillerdog</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from june party corner.
| Data | june party corner |
| --- | --- |
| Tweets downloaded | 196 |
| Retweets | 20 |
| Short tweets | 30 |
| Tweets kept | 146 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1u7twx27/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itskillerdog's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vg0bbs8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vg0bbs8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itskillerdog')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itskillerdog/1630971994166/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itskillerdog
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
june party corner
@itskillerdog
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from june party corner.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itskillerdog's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
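For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itskillerdog')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```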
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Luci Keller 🤖 AI Bot </div>
<div style="font-size: 15px">@itslucikeller bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itslucikeller's tweets](https://twitter.com/itslucikeller).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 69 |
| Short tweets | 352 |
| Tweets kept | 2825 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nhr24ju/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itslucikeller's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2zv0hvjq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2zv0hvjq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itslucikeller')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itslucikeller/1616622417664/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itslucikeller
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Luci Keller AI Bot
@itslucikeller bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @itslucikeller's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itslucikeller's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
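For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itslucikeller')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```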
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Aqsa.</div>
<div style="text-align: center; font-size: 14px;">@itsmeaqsaa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Aqsa.
| Data | Aqsa. |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 77 |
| Short tweets | 1543 |
| Tweets kept | 1626 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xy28krg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsmeaqsaa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18kg27bt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18kg27bt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsmeaqsaa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itsmeaqsaa/1631734394856/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itsmeaqsaa
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Aqsa.
@itsmeaqsaa
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Aqsa.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itsmeaqsaa's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
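For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itsmeaqsaa')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```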
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">NFT ChiΞf of Staff 🤖 AI Bot </div>
<div style="font-size: 15px">@itspublu bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@itspublu's tweets](https://twitter.com/itspublu).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1768 |
| Retweets | 481 |
| Short tweets | 282 |
| Tweets kept | 1005 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2l8q7e87/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itspublu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vo0wnnt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vo0wnnt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itspublu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itspublu/1616709602963/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itspublu
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
NFT ChiΞf of Staff AI Bot
@itspublu bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @itspublu's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itspublu's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
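For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itspublu')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```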
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Six words story</div>
<div style="text-align: center; font-size: 14px;">@itssixword</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Six words story.
| Data | Six words story |
| --- | --- |
| Tweets downloaded | 282 |
| Retweets | 0 |
| Short tweets | 2 |
| Tweets kept | 280 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dbtmbzz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itssixword's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wydugsv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wydugsv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itssixword')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/itssixword/1629833127428/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/itssixword
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Six words story
@itssixword
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Six words story.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @itssixword's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
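For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/itssixword')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```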
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">uditgoenka.eth</div>
<div style="text-align: center; font-size: 14px;">@iuditg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from uditgoenka.eth.
| Data | uditgoenka.eth |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 993 |
| Short tweets | 450 |
| Tweets kept | 1807 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1r2lhfr0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iuditg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/iswph9y4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/iswph9y4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iuditg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/iuditg/1639532212187/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/iuditg
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
uditgoenka.eth
@iuditg
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from uditgoenka.eth.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @iuditg's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
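For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/iuditg')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```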
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1202257345734037504/tRJA6HEx_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">| praveen narayan 〉 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ivanpeer bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ivanpeer's tweets](https://twitter.com/ivanpeer).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>971</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>110</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>102</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>759</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2thafoo8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ivanpeer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3fepz7hm) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3fepz7hm/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/ivanpeer'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file -->
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ivanpeer/1603607581850/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ivanpeer
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">| praveen narayan 〉 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ivanpeer bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @ivanpeer's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>971</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>110</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>102</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>759</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @ivanpeer's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/ivanpeer'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">##lainpilled</div>
<div style="text-align: center; font-size: 14px;">@ivegottagetagf</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ##lainpilled.
| Data | ##lainpilled |
| --- | --- |
| Tweets downloaded | 128 |
| Retweets | 7 |
| Short tweets | 16 |
| Tweets kept | 105 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7kyd6ojb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ivegottagetagf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ropyewj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ropyewj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ivegottagetagf')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/ivegottagetagf/1623876885491/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/ivegottagetagf
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
##lainpilled
@ivegottagetagf
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from ##lainpilled.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @ivegottagetagf's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
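For example (the same pipeline call as in the card above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/ivegottagetagf')

# Generate five continuations of the prompt
generator("My dream is", num_return_sequences=5)
```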
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Evans (The Only Robert Evans)</div>
<div style="text-align: center; font-size: 14px;">@iwriteok</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Evans (The Only Robert Evans).
| Data | Robert Evans (The Only Robert Evans) |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 1269 |
| Short tweets | 142 |
| Tweets kept | 1807 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3hjcp2ib/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iwriteok's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wq4n95ia) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wq4n95ia/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iwriteok')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/iwriteok/1668924855688/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/iwriteok
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Robert Evans (The Only Robert Evans)
@iwriteok
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Robert Evans (The Only Robert Evans).
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @iwriteok's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
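For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/iwriteok')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```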
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">웃</div>
<div style="text-align: center; font-size: 14px;">@iyxnmt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 웃.
| Data | 웃 |
| --- | --- |
| Tweets downloaded | 3073 |
| Retweets | 1416 |
| Short tweets | 660 |
| Tweets kept | 997 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lpd2izx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iyxnmt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qg153k0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qg153k0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iyxnmt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/iyxnmt/1621146502054/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/iyxnmt
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
웃
@iyxnmt
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from 웃.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @iyxnmt's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
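For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/iyxnmt')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```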
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jamie Beck 🤖 AI Bot </div>
<div style="font-size: 15px">@j_beck00 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@j_beck00's tweets](https://twitter.com/j_beck00).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 75 |
| Retweets | 14 |
| Short tweets | 4 |
| Tweets kept | 57 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23mq58mv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @j_beck00's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mbmtl4r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mbmtl4r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/j_beck00')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/j_beck00/1617471704579/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/j_beck00
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jamie Beck AI Bot
@j\_beck00 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @j\_beck00's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @j\_beck00's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
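For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/j_beck00')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```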
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333957151576887297/_1ExBQa3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jocelyn (male) of the 365 Followers 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@j_j_j_j_j_jones bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@j_j_j_j_j_jones's tweets](https://twitter.com/j_j_j_j_j_jones).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3225</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>320</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>482</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2423</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uz60miha/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @j_j_j_j_j_jones's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/soi1lw7l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/soi1lw7l/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/j_j_j_j_j_jones'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/j_j_j_j_j_jones/1609141746129/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/j_j_j_j_j_jones
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jocelyn (male) of the 365 Followers AI Bot </div>
<div style="font-size: 15px; color: #657786">@j_j_j_j_j_jones bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @j_j_j_j_j_jones's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3225</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>320</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>482</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2423</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @j_j_j_j_j_jones's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/j_j_j_j_j_jones'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jack</div>
<div style="text-align: center; font-size: 14px;">@jack</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jack.
| Data | jack |
| --- | --- |
| Tweets downloaded | 3231 |
| Retweets | 1147 |
| Short tweets | 817 |
| Tweets kept | 1267 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dibfzjll/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jack's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3f3e0roo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3f3e0roo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jack')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/jack/1653287961086/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jack
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
jack
@jack
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from jack.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jack's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
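For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/jack')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```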
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jack Walsh 🤖 AI Bot </div>
<div style="font-size: 15px">@jack_walshh bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jack_walshh's tweets](https://twitter.com/jack_walshh).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1095 |
| Retweets | 234 |
| Short tweets | 121 |
| Tweets kept | 740 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1o93caoq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jack_walshh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23dq75x4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23dq75x4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jack_walshh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jack_walshh/1616646386178/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jack_walshh
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jack Walsh AI Bot
@jack\_walshh bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jack\_walshh's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jack\_walshh's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
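For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/jack_walshh')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```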
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1384243878748856321/vreel6UH_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1417910390051246080/wKq6pjPR_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DAN KOE & humble farmer & Jack Butcher</div>
<div style="text-align: center; font-size: 14px;">@jackbutcher-paikcapital-thedankoe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DAN KOE & humble farmer & Jack Butcher.
| Data | DAN KOE | humble farmer | Jack Butcher |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3247 | 3220 |
| Retweets | 18 | 601 | 208 |
| Short tweets | 899 | 500 | 1048 |
| Tweets kept | 2332 | 2146 | 1964 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mvqun4ol/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jackbutcher-paikcapital-thedankoe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qd8720q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qd8720q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jackbutcher-paikcapital-thedankoe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jackbutcher-paikcapital-thedankoe
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
DAN KOE & humble farmer & Jack Butcher
@jackbutcher-paikcapital-thedankoe
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from DAN KOE & humble farmer & Jack Butcher.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jackbutcher-paikcapital-thedankoe's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
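For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/jackbutcher-paikcapital-thedankoe')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```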
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/726446881547517952/ULhSTKxN_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jack Clark 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@jackclarksf bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://bit.ly/2TGXMZf).
## Training data
The model was trained on [@jackclarksf's tweets](https://twitter.com/jackclarksf).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3216</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>603</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>187</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2426</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3r89xyps/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jackclarksf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3ovybsy5) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3ovybsy5/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/jackclarksf'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jackclarksf
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jack Clark AI Bot </div>
<div style="font-size: 15px; color: #657786">@jackclarksf bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @jackclarksf's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3216</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>603</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>187</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2426</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @jackclarksf's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/jackclarksf'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">JackGordon 🤖 AI Bot </div>
<div style="font-size: 15px">@jackgordonyt bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jackgordonyt's tweets](https://twitter.com/jackgordonyt).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 660 |
| Retweets | 146 |
| Short tweets | 106 |
| Tweets kept | 408 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3d7wzfbd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jackgordonyt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fa0cjwj6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fa0cjwj6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jackgordonyt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jackgordonyt/1615830241451/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jackgordonyt
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
JackGordon AI Bot
@jackgordonyt bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jackgordonyt's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jackgordonyt's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
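For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/jackgordonyt')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```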
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JackieRacc_VTuber</div>
<div style="text-align: center; font-size: 14px;">@jackieracc_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JackieRacc_VTuber.
| Data | JackieRacc_VTuber |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 252 |
| Short tweets | 827 |
| Tweets kept | 2170 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gx7e8h18/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jackieracc_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cvwo68s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cvwo68s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jackieracc_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jackieracc_/1620680912006/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jackieracc_
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
JackieRacc\_VTuber
@jackieracc\_
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from JackieRacc\_VTuber.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jackieracc\_'s tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
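For reference, the card's own example in runnable form; the model id comes straight from this card's metadata, and the prompt matches the card's widget:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 model from the Hugging Face Hub
generator = pipeline('text-generation',
                     model='huggingtweets/jackieracc_')

# Sample five completions from the card's example prompt
generator("My dream is", num_return_sequences=5)
```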
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1026642891374874625/GPdw8p_L_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jacknjellify 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@jacknjellify bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jacknjellify's tweets](https://twitter.com/jacknjellify).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3103</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1025</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>336</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1742</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/nmeryp1f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jacknjellify's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3q5b8kag) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3q5b8kag/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/jacknjellify'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jacknjellify
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jacknjellify AI Bot </div>
<div style="font-size: 15px; color: #657786">@jacknjellify bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @jacknjellify's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3103</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1025</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>336</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1742</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @jacknjellify's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/jacknjellify'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jack Posobiec 🇺🇸</div>
<div style="text-align: center; font-size: 14px;">@jackposobiec</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jack Posobiec 🇺🇸.
| Data | Jack Posobiec 🇺🇸 |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 818 |
| Short tweets | 511 |
| Tweets kept | 1917 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3s4mnium/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jackposobiec's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vllrmfa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vllrmfa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jackposobiec')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jackposobiec/1630169093455/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jackposobiec
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Jack Posobiec 🇺🇸
@jackposobiec
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Jack Posobiec 🇺🇸.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jackposobiec's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
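For reference, the snippet from the card above (verbatim, model id `huggingtweets/jackposobiec`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jackposobiec')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```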
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jacksfilms🌹</div>
<div style="text-align: center; font-size: 14px;">@jacksfilms</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jacksfilms🌹.
| Data | jacksfilms🌹 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 97 |
| Short tweets | 444 |
| Tweets kept | 2708 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hsenlsv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jacksfilms's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ow20675) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ow20675/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jacksfilms')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/jacksfilms/1653095886748/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jacksfilms
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
jacksfilms
@jacksfilms
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from jacksfilms.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jacksfilms's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
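For reference, the snippet from the card above (verbatim, model id `huggingtweets/jacksfilms`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jacksfilms')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```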
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1284100202421342209/MVXATULR_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Day6 Jae 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@jae_day6 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jae_day6's tweets](https://twitter.com/jae_day6).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3229</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>123</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>1021</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2085</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3lpvhxwq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jae_day6's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3vyjrutx) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3vyjrutx/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jae_day6')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jae_day6/1601274497991/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jae_day6
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Day6 Jae AI Bot </div>
<div style="font-size: 15px; color: #657786">@jae_day6 bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @jae_day6's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3229</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>123</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>1021</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2085</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @jae_day6's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jae_day6')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Programo, luego existo</div>
<div style="text-align: center; font-size: 14px;">@jagedn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Programo, luego existo.
| Data | Programo, luego existo |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 549 |
| Short tweets | 220 |
| Tweets kept | 2475 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ptz28obp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jagedn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1i8g6srp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1i8g6srp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jagedn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jagedn/1625062317603/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jagedn
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Programo, luego existo
@jagedn
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Programo, luego existo.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jagedn's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
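For reference, the snippet from the card above (verbatim, model id `huggingtweets/jagedn`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jagedn')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```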
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">certified cael moment™ 🔜 BLFC 🤖 AI Bot </div>
<div style="font-size: 15px">@jaguarunlocked bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jaguarunlocked's tweets](https://twitter.com/jaguarunlocked).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3176 |
| Retweets | 1521 |
| Short tweets | 203 |
| Tweets kept | 1452 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2j5t38f8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jaguarunlocked's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3n6tm7lj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3n6tm7lj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jaguarunlocked')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jaguarunlocked/1617770655879/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jaguarunlocked
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
certified cael moment™ BLFC AI Bot
@jaguarunlocked bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jaguarunlocked's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jaguarunlocked's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
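For reference, the snippet from the card above (verbatim, model id `huggingtweets/jaguarunlocked`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jaguarunlocked')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```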
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">jacob 🤖 AI Bot </div>
<div style="font-size: 15px">@jakeaccino bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jakeaccino's tweets](https://twitter.com/jakeaccino).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 179 |
| Retweets | 8 |
| Short tweets | 53 |
| Tweets kept | 118 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/239ufxkc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jakeaccino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3myo5k1y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3myo5k1y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jakeaccino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jakeaccino
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
jacob AI Bot
@jakeaccino bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jakeaccino's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jakeaccino's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
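For reference, the snippet from the card above (verbatim, model id `huggingtweets/jakeaccino`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jakeaccino')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```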
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1253134985948614657/xN4lDF3W_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">James Cham ✍🏻 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@jamescham bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://bit.ly/2TGXMZf).
## Training data
The model was trained on [@jamescham's tweets](https://twitter.com/jamescham).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3213</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>744</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>317</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2152</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/20ku8js2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamescham's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/32to3ioi) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/32to3ioi/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jamescham')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jamescham
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">James Cham AI Bot </div>
<div style="font-size: 15px; color: #657786">@jamescham bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @jamescham's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3213</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>744</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>317</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2152</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @jamescham's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jamescham')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1324782032124215296/HMG6-q8g_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1401837042934468611/okzqIoMb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CANCELLED & James Charles & Logan Paul</div>
<div style="text-align: center; font-size: 14px;">@jamescharles-loganpaul-tanamongeau</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CANCELLED & James Charles & Logan Paul.
| Data | CANCELLED | James Charles | Logan Paul |
| --- | --- | --- | --- |
| Tweets downloaded | 3167 | 3182 | 3246 |
| Retweets | 938 | 480 | 98 |
| Short tweets | 522 | 496 | 287 |
| Tweets kept | 1707 | 2206 | 2861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2avr905u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamescharles-loganpaul-tanamongeau's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2at101p1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2at101p1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jamescharles-loganpaul-tanamongeau')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jamescharles-loganpaul-tanamongeau/1631598787303/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jamescharles-loganpaul-tanamongeau
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI CYBORG
CANCELLED & James Charles & Logan Paul
@jamescharles-loganpaul-tanamongeau
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from CANCELLED & James Charles & Logan Paul.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jamescharles-loganpaul-tanamongeau's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
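For reference, the snippet from the card above (verbatim, model id `huggingtweets/jamescharles-loganpaul-tanamongeau`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jamescharles-loganpaul-tanamongeau')  # checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```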
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">James Clear 🤖 AI Bot </div>
<div style="font-size: 15px">@jamesclear bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jamesclear's tweets](https://twitter.com/jamesclear).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 190 |
| Short tweets | 385 |
| Tweets kept | 2672 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hvyoab9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamesclear's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/v67076s3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/v67076s3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jamesclear')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jamesclear/1616666243525/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jamesclear
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
James Clear AI Bot
@jamesclear bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jamesclear's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jamesclear's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
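For reference, the snippet from the card above (verbatim, model id `huggingtweets/jamesclear`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jamesclear')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```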
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">James Hutton 🤖 AI Bot </div>
<div style="font-size: 15px">@jameshuttonphil bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jameshuttonphil's tweets](https://twitter.com/jameshuttonphil).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 648 |
| Retweets | 25 |
| Short tweets | 89 |
| Tweets kept | 534 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bamdk9dm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jameshuttonphil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jp3j37a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jp3j37a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jameshuttonphil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jameshuttonphil/1617296338533/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jameshuttonphil
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
James Hutton AI Bot
@jameshuttonphil bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jameshuttonphil's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jameshuttonphil's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
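For reference, the snippet from the card above (verbatim, model id `huggingtweets/jameshuttonphil`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jameshuttonphil')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```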
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">James Sherlock 🤖 AI Bot </div>
<div style="font-size: 15px">@jamespsherlock bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jamespsherlock's tweets](https://twitter.com/jamespsherlock).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 743 |
| Retweets | 260 |
| Short tweets | 44 |
| Tweets kept | 439 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ulatc4k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamespsherlock's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1btltx5f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1btltx5f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jamespsherlock')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jamespsherlock/1616781166201/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jamespsherlock
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
James Sherlock AI Bot
@jamespsherlock bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jamespsherlock's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jamespsherlock's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
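For reference, the snippet from the card above (verbatim, model id `huggingtweets/jamespsherlock`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jamespsherlock')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```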
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jamila Husain</div>
<div style="text-align: center; font-size: 14px;">@jamz5251</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jamila Husain.
| Data | Jamila Husain |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 900 |
| Short tweets | 65 |
| Tweets kept | 2269 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/r9z40rld/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamz5251's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20gadkdv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20gadkdv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jamz5251')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jamz5251/1622370618440/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jamz5251
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Jamila Husain
@jamz5251
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Jamila Husain.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jamz5251's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
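For reference, the snippet from the card above (verbatim, model id `huggingtweets/jamz5251`):

```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jamz5251')  # fine-tuned checkpoint from this card
generator("My dream is", num_return_sequences=5)  # returns 5 sampled continuations
```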
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Columbine Janie</div>
<div style="text-align: center; font-size: 14px;">@janieclone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Columbine Janie.
| Data | Columbine Janie |
| --- | --- |
| Tweets downloaded | 3072 |
| Retweets | 1211 |
| Short tweets | 462 |
| Tweets kept | 1399 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1divgffx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @janieclone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ic6ynmd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ic6ynmd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/janieclone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/janieclone
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Columbine Janie
@janieclone
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Columbine Janie.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @janieclone's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
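```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/janieclone')
generator("My dream is", num_return_sequences=5)
```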
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Poolgirl Janie Diamond</div>
<div style="text-align: center; font-size: 14px;">@janiedied</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Poolgirl Janie Diamond.
| Data | Poolgirl Janie Diamond |
| --- | --- |
| Tweets downloaded | 1505 |
| Retweets | 552 |
| Short tweets | 283 |
| Tweets kept | 670 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3232onrl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @janiedied's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/smx9pf1l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/smx9pf1l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/janiedied')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/janiedied/1645111847557/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/janiedied
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Poolgirl Janie Diamond
@janiedied
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Poolgirl Janie Diamond.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @janiedied's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
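```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/janiedied')
generator("My dream is", num_return_sequences=5)
```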
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2158604209/feuilles_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Au Jardin 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@jardininfo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jardininfo's tweets](https://twitter.com/jardininfo).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3200</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>375</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2825</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/48yjj01v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jardininfo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3t0scjqn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3t0scjqn/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/jardininfo'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jardininfo/1610568803876/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jardininfo
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Au Jardin AI Bot </div>
<div style="font-size: 15px; color: #657786">@jardininfo bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @jardininfo's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3200</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>375</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2825</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @jardininfo's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/jardininfo'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jason Chen 🤖 AI Bot </div>
<div style="font-size: 15px">@jasonchen0325 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jasonchen0325's tweets](https://twitter.com/jasonchen0325).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 354 |
| Retweets | 30 |
| Short tweets | 9 |
| Tweets kept | 315 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wsqq8bl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jasonchen0325's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w2gqjxjr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w2gqjxjr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jasonchen0325')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jasonchen0325/1616715271094/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jasonchen0325
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jason Chen AI Bot
@jasonchen0325 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jasonchen0325's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jasonchen0325's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
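```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/jasonchen0325')
generator("My dream is", num_return_sequences=5)
```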
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">J.A. Sutherland SciFi Books 🤖 AI Bot </div>
<div style="font-size: 15px">@jasutherlandbks bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jasutherlandbks's tweets](https://twitter.com/jasutherlandbks).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 952 |
| Short tweets | 169 |
| Tweets kept | 2071 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/210hhn5z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jasutherlandbks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23qtgnsl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23qtgnsl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jasutherlandbks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jasutherlandbks/1616634473974/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jasutherlandbks
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
J.A. Sutherland SciFi Books AI Bot
@jasutherlandbks bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jasutherlandbks's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jasutherlandbks's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
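```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/jasutherlandbks')
generator("My dream is", num_return_sequences=5)
```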
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🧠Jattazo Shin🧠 !!COMMISSIONS OPEN!!</div>
<div style="text-align: center; font-size: 14px;">@jattazo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🧠Jattazo Shin🧠 !!COMMISSIONS OPEN!!.
| Data | 🧠Jattazo Shin🧠 !!COMMISSIONS OPEN!! |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 196 |
| Short tweets | 757 |
| Tweets kept | 2290 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/oc8tbgql/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jattazo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7n3lt4bb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7n3lt4bb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jattazo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jattazo/1620679164511/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jattazo
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Jattazo Shin !!COMMISSIONS OPEN!!
@jattazo
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Jattazo Shin !!COMMISSIONS OPEN!!.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jattazo's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
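```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/jattazo')
generator("My dream is", num_return_sequences=5)
```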
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">🧠Jattazo Shin🧠 🤖 AI Bot </div>
<div style="font-size: 15px">@jattazoshin bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jattazoshin's tweets](https://twitter.com/jattazoshin).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2768 |
| Retweets | 179 |
| Short tweets | 414 |
| Tweets kept | 2175 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gto8yaa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jattazoshin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gdg6xx3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gdg6xx3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jattazoshin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jattazoshin/1613105660546/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jattazoshin
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jattazo Shin AI Bot
@jattazoshin bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jattazoshin's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jattazoshin's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
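```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/jattazoshin')
generator("My dream is", num_return_sequences=5)
```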
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cool Narcissist 🤖 AI Bot </div>
<div style="font-size: 15px">@java_jigga bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@java_jigga's tweets](https://twitter.com/java_jigga).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 313 |
| Short tweets | 426 |
| Tweets kept | 2507 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kvpyc8u1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @java_jigga's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6p3ishch) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6p3ishch/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/java_jigga')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/java_jigga/1617788084385/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/java_jigga
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Cool Narcissist AI Bot
@java_jigga bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @java_jigga's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @java_jigga's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
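```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/java_jigga')
generator("My dream is", num_return_sequences=5)
```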
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Javi Ballester 🤖 AI Bot </div>
<div style="font-size: 15px">@javiballester4 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@javiballester4's tweets](https://twitter.com/javiballester4).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 369 |
| Retweets | 4 |
| Short tweets | 108 |
| Tweets kept | 257 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33xklndf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @javiballester4's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s6kbzp61) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s6kbzp61/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/javiballester4')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/javiballester4/1616630748333/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/javiballester4
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Javi Ballester AI Bot
@javiballester4 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @javiballester4's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @javiballester4's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
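```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/javiballester4')
generator("My dream is", num_return_sequences=5)
```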
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Javierhalamadrid 🤖 AI Bot </div>
<div style="font-size: 15px">@javierito321 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@javierito321's tweets](https://twitter.com/javierito321).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 144 |
| Short tweets | 90 |
| Tweets kept | 3009 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36rnr3vs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @javierito321's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bj56pvfw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bj56pvfw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/javierito321')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/javierito321/1617016242704/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/javierito321
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Javierhalamadrid AI Bot
@javierito321 bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @javierito321's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @javierito321's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
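```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/javierito321')
generator("My dream is", num_return_sequences=5)
```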
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
 {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/922569683420794880/bk2ERDe2_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gabor Javorszky 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@javorszky bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@javorszky's tweets](https://twitter.com/javorszky).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3137</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>2139</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>67</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>931</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1cyr2cuz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @javorszky's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2sa503ur) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2sa503ur/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/javorszky'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/javorszky/1602234108282/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/javorszky
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<link rel="stylesheet" href="URL
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('URL
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gabor Javorszky AI Bot </div>
<div style="font-size: 15px; color: #657786">@javorszky bot</div>
</div>
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
## How does it work?
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
## Training data
The model was trained on @javorszky's tweets.
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3137</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>2139</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>67</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>931</td>
</tr>
</tbody>
</table>
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
## Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @javorszky's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/javorszky'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jay Alammar</div>
<div style="text-align: center; font-size: 14px;">@jayalammar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jay Alammar.
| Data | Jay Alammar |
| --- | --- |
| Tweets downloaded | 692 |
| Retweets | 198 |
| Short tweets | 35 |
| Tweets kept | 459 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wf3zug3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jayalammar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hq8g8xlh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hq8g8xlh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jayalammar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "http://www.huggingtweets.com/jayalammar/1638460288971/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jayalammar
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AI BOT
Jay Alammar
@jayalammar
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on tweets from Jay Alammar.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jayalammar's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
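```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/jayalammar')
generator("My dream is", num_return_sequences=5)
```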
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jasmine Persephone ☭ Black Podcast Revolution 🤖 AI Bot </div>
<div style="font-size: 15px">@jazzpomegranate bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jazzpomegranate's tweets](https://twitter.com/jazzpomegranate).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3208 |
| Retweets | 184 |
| Short tweets | 720 |
| Tweets kept | 2304 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/312m9owm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jazzpomegranate's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jvni6p8a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jvni6p8a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jazzpomegranate')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jazzpomegranate/1614106581220/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jazzpomegranate
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jasmine Persephone Black Podcast Revolution AI Bot
@jazzpomegranate bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jazzpomegranate's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jazzpomegranate's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
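```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/jazzpomegranate')
generator("My dream is", num_return_sequences=5)
```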
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jon Beasley-Murray 🤖 AI Bot </div>
<div style="font-size: 15px">@jbmurray bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jbmurray's tweets](https://twitter.com/jbmurray).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2861 |
| Retweets | 364 |
| Short tweets | 260 |
| Tweets kept | 2237 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yppksvx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jbmurray's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/dqw1zvsq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/dqw1zvsq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jbmurray')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jbmurray/1617246417542/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jbmurray
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jon Beasley-Murray AI Bot
@jbmurray bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jbmurray's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jbmurray's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Jordan Peterson Quotes 🤖 AI Bot </div>
<div style="font-size: 15px">@jbpetersonquote bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jbpetersonquote's tweets](https://twitter.com/jbpetersonquote).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1983 |
| Retweets | 605 |
| Short tweets | 47 |
| Tweets kept | 1331 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1n1ihdfe/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jbpetersonquote's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qijh16v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qijh16v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jbpetersonquote')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
{"language": "en", "tags": ["huggingtweets"], "thumbnail": "https://www.huggingtweets.com/jbpetersonquote/1620104584619/predictions.png", "widget": [{"text": "My dream is"}]}
|
text-generation
|
huggingtweets/jbpetersonquote
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Jordan Peterson Quotes AI Bot
@jbpetersonquote bot
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
-----------------
The model uses the following pipeline.
!pipeline
To understand how the model was developed, check the W&B report.
Training data
-------------
The model was trained on @jbpetersonquote's tweets.
Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
Training procedure
------------------
The model is based on a pre-trained GPT-2 which is fine-tuned on @jbpetersonquote's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
How to use
----------
You can use this model directly with a pipeline for text generation:
Limitations and bias
--------------------
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
-----
*Built by Boris Dayma*
For more details, visit the project repository.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
[
57
] |
[
"passage: TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingtweets #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
[
0.004547144751995802,
-0.006708405911922455,
-0.007013476919382811,
0.01947171241044998,
0.15818242728710175,
0.03448796644806862,
0.08709780126810074,
0.15389476716518402,
-0.019877297803759575,
-0.022431448101997375,
0.18047170341014862,
0.173692986369133,
-0.012988686561584473,
0.09047263860702515,
-0.05271327868103981,
-0.2622397541999817,
0.03682216629385948,
0.05513067543506622,
-0.007422737777233124,
0.14252057671546936,
0.07580838352441788,
-0.023790201172232628,
0.11380083113908768,
-0.02966974675655365,
-0.202972412109375,
0.03197307139635086,
0.0615268237888813,
-0.09518525749444962,
0.11083168536424637,
0.04628797993063927,
0.08698221296072006,
0.022143812850117683,
-0.07331052422523499,
-0.120787613093853,
0.04532235115766525,
0.045263588428497314,
-0.06358368694782257,
0.06480421870946884,
0.08820623904466629,
-0.1065920814871788,
0.1416475921869278,
0.07373794168233871,
-0.01588049717247486,
0.07824484258890152,
-0.17789237201213837,
-0.03725104406476021,
-0.036331940442323685,
0.007741300854831934,
0.07058489322662354,
0.0750737413764,
-0.019116664305329323,
0.1746976524591446,
-0.06598041951656342,
0.09777773916721344,
0.17528840899467468,
-0.2887236773967743,
-0.018040433526039124,
0.0492081381380558,
0.0887371376156807,
0.04900359362363815,
-0.024227341637015343,
0.08339477330446243,
0.06365471333265305,
0.01686069741845131,
0.014271941967308521,
-0.06960906833410263,
-0.09346919506788254,
0.03645368665456772,
-0.06932124495506287,
-0.05699722096323967,
0.22001419961452484,
-0.0334535576403141,
0.04674676060676575,
-0.03953840583562851,
-0.09316058456897736,
-0.028927378356456757,
-0.027232296764850616,
-0.00907184462994337,
-0.05413005128502846,
0.08754174411296844,
-0.015151693485677242,
-0.06331931799650192,
-0.1435878872871399,
-0.012912428006529808,
-0.15805892646312714,
0.13816505670547485,
0.004333257209509611,
0.04586424678564072,
-0.22094038128852844,
0.1012546569108963,
0.022817784920334816,
-0.08995530754327774,
0.04930093511939049,
-0.09425957500934601,
0.0717538446187973,
0.0007676240638829768,
-0.04885277524590492,
-0.02944779396057129,
0.08848895877599716,
0.14690880477428436,
-0.02718975953757763,
0.005980455316603184,
-0.01338018849492073,
0.0733228251338005,
0.059399042278528214,
0.028748195618391037,
-0.006081049330532551,
-0.052236080169677734,
0.05618719011545181,
-0.1417204737663269,
-0.010511515662074089,
-0.07227712869644165,
-0.10605388879776001,
-0.04232237488031387,
0.03443120792508125,
0.060671232640743256,
0.042230576276779175,
0.11220116913318634,
-0.04771716892719269,
-0.01857093721628189,
0.05281376466155052,
-0.03979083523154259,
-0.008994937874376774,
-0.01990325190126896,
0.018122754991054535,
0.13074275851249695,
-0.019943278282880783,
0.03407962992787361,
-0.10256942361593246,
0.05431444197893143,
-0.10281401127576828,
-0.01971535198390484,
-0.014149561524391174,
-0.04367954283952713,
0.031883664429187775,
-0.12165860831737518,
0.016123656183481216,
-0.16833168268203735,
-0.14714312553405762,
0.002859292319044471,
-0.016588665544986725,
-0.017911825329065323,
-0.07954888790845871,
-0.04400517791509628,
-0.02466505579650402,
0.06924423575401306,
-0.04276731237769127,
-0.00935916043817997,
-0.05846982076764107,
0.11090090870857239,
-0.05349889397621155,
0.07203050702810287,
-0.1194647029042244,
0.0557217076420784,
-0.14930842816829681,
-0.013004516251385212,
-0.04842504858970642,
0.07119924575090408,
0.015398351475596428,
0.1813964694738388,
-0.006925920024514198,
-0.003623353084549308,
-0.09382472932338715,
0.06455672532320023,
-0.02733452245593071,
0.24096953868865967,
-0.0756828561425209,
-0.14226967096328735,
0.21630549430847168,
-0.06334739923477173,
-0.14993034303188324,
0.1314547061920166,
0.01843975856900215,
0.08251222223043442,
0.10434340685606003,
0.19023460149765015,
0.01808990351855755,
-0.007808534894138575,
0.054424818605184555,
0.07603957504034042,
-0.1683882623910904,
-0.033340878784656525,
0.0012923459289595485,
-0.00014291972911451012,
-0.1366809904575348,
0.04632483050227165,
0.1230006217956543,
0.09730340540409088,
-0.07249721139669418,
-0.018487868830561638,
-0.030607668682932854,
0.0016078021144494414,
0.04144361615180969,
-0.0005212334799580276,
0.09951234608888626,
-0.1033509373664856,
-0.04366454482078552,
-0.06751791387796402,
-0.002970147645100951,
0.011176802217960358,
0.03924661502242088,
-0.04455869272351265,
0.09700342267751694,
-0.007412149105221033,
0.0545678474009037,
-0.13708296418190002,
-0.07981666922569275,
-0.016090448945760727,
0.1597585678100586,
0.040224816650152206,
0.04663374274969101,
0.0566885769367218,
-0.05624469742178917,
-0.015493324026465416,
-0.010199432261288166,
0.16243304312229156,
-0.04404180869460106,
-0.07694169133901596,
-0.07860849797725677,
0.10474636405706406,
-0.06389671564102173,
0.026263169944286346,
-0.051667314022779465,
0.024654213339090347,
0.04686986654996872,
0.1110762283205986,
0.004046999383717775,
0.026442723348736763,
-0.012835992500185966,
-0.007690808270126581,
-0.07657550275325775,
-0.01617686077952385,
0.1077079176902771,
-0.0017721779877319932,
-0.06809886544942856,
0.2437063455581665,
-0.16884316504001617,
0.21163912117481232,
0.20976658165454865,
-0.2492678016424179,
-0.02882898785173893,
-0.04848965257406235,
-0.04766342043876648,
-0.0012878701090812683,
0.06041788309812546,
-0.034700244665145874,
0.09027024358510971,
-0.03288675472140312,
0.16564396023750305,
-0.051203593611717224,
-0.07646744698286057,
0.019007064402103424,
-0.05823178589344025,
-0.05114857107400894,
0.07018019258975983,
0.08213616907596588,
-0.1630844622850418,
0.18756183981895447,
0.21879082918167114,
0.06839460134506226,
0.2044064849615097,
0.00858453568071127,
-0.010656360536813736,
0.07200875878334045,
-0.04608747735619545,
-0.03843220695853233,
-0.06601633131504059,
-0.15238076448440552,
-0.03009703755378723,
0.06625645607709885,
0.030863380059599876,
0.09900964051485062,
-0.09019728004932404,
-0.08104760944843292,
-0.017665131017565727,
0.004776675254106522,
0.00156646769028157,
0.11991100758314133,
0.03676433861255646,
0.13820022344589233,
-0.01955524832010269,
0.022415857762098312,
0.08040772378444672,
0.016582515090703964,
-0.10843544453382492,
0.16101348400115967,
-0.13329310715198517,
-0.3788211941719055,
-0.14546175301074982,
-0.13134250044822693,
-0.020925991237163544,
0.03777816519141197,
0.1120775043964386,
-0.1329103261232376,
0.005511005409061909,
-0.007893978618085384,
0.10391844809055328,
-0.08707519620656967,
0.039245378226041794,
-0.07586963474750519,
0.0314689576625824,
-0.060405436903238297,
-0.07552991807460785,
-0.03722400963306427,
-0.028465405106544495,
-0.09132689982652664,
0.16675986349582672,
-0.11130212247371674,
0.06035055220127106,
0.16001324355602264,
0.021197395399212837,
0.03523072600364685,
-0.05174810439348221,
0.18330632150173187,
-0.112345851957798,
0.020098978653550148,
0.15624848008155823,
-0.013005592860281467,
0.08254575729370117,
0.08188403397798538,
-0.013132697902619839,
-0.10316278785467148,
0.05240294709801674,
0.001463406952098012,
-0.10209372639656067,
-0.1950312703847885,
-0.10119245946407318,
-0.08230090886354446,
0.15922248363494873,
0.06361804902553558,
0.058937788009643555,
0.17968137562274933,
0.07578518986701965,
-0.038606274873018265,
-0.00038743947516195476,
-0.00239798822440207,
0.08808282762765884,
0.13635766506195068,
-0.01442645862698555,
0.1225903332233429,
-0.04975935071706772,
-0.10913994163274765,
0.12899059057235718,
0.01750512234866619,
0.03937286511063576,
0.051435839384794235,
0.021011192351579666,
-0.011281835846602917,
0.11866551637649536,
0.13484057784080505,
0.10447502881288528,
-0.015693627297878265,
-0.0293489471077919,
-0.04774824157357216,
-0.01359935849905014,
-0.033305928111076355,
0.03640862926840782,
0.008061517030000687,
-0.14140670001506805,
-0.06158366799354553,
-0.11537835001945496,
0.08758961409330368,
0.10668005049228668,
0.07567808032035828,
-0.21108253300189972,
-0.003950516227632761,
0.07933880388736725,
-0.03630997985601425,
-0.11126025766134262,
0.08416172116994858,
0.03095286712050438,
-0.1277567446231842,
0.07218055427074432,
-0.03519461303949356,
0.12458370625972748,
-0.0032897875644266605,
0.09583556652069092,
-0.03598680719733238,
-0.027483470737934113,
-0.013308011926710606,
0.09818253666162491,
-0.3191508650779724,
0.1621316522359848,
-0.017933005467057228,
-0.0618131123483181,
-0.06667962670326233,
-0.02528184838593006,
0.015994107350707054,
0.07729468494653702,
0.10861869156360626,
0.021759910508990288,
0.01640525460243225,
-0.07345785945653915,
-0.042352862656116486,
0.038021303713321686,
0.12403716146945953,
-0.06827268749475479,
-0.012903391383588314,
-0.04523605480790138,
0.00796645786613226,
-0.017124788835644722,
-0.008793274872004986,
0.006911922711879015,
-0.14962191879749298,
0.05182485654950142,
0.014736213721334934,
0.07058768719434738,
0.0436982735991478,
-0.014969068579375744,
-0.09180716425180435,
0.18274778127670288,
-0.015714606270194054,
-0.07271543145179749,
-0.12616917490959167,
-0.05262751132249832,
0.030376195907592773,
-0.05518756061792374,
0.021047864109277725,
-0.06501689553260803,
-0.0035362408962100744,
-0.06755607575178146,
-0.22007296979427338,
0.1278373897075653,
-0.08437205106019974,
-0.07192739844322205,
-0.04912353679537773,
0.2010866105556488,
-0.051223888993263245,
0.003238252131268382,
0.010222852230072021,
0.021994104608893394,
-0.11474784463644028,
-0.09469719231128693,
0.07112357765436172,
-0.03247172012925148,
0.03123478777706623,
0.0022505864035338163,
-0.04091062396764755,
0.016593176871538162,
-0.06314414739608765,
-0.011381587944924831,
0.27866554260253906,
0.23951324820518494,
-0.040407944470644,
0.1904350072145462,
0.11012271791696548,
-0.08163551241159439,
-0.3069863021373749,
-0.10166139155626297,
-0.12140648066997528,
-0.02996143139898777,
-0.017288926988840103,
-0.16865339875221252,
0.06477722525596619,
0.038930367678403854,
0.009261871688067913,
0.13778774440288544,
-0.20730599761009216,
-0.08823523670434952,
0.09138026833534241,
-0.02557477355003357,
0.43079736828804016,
-0.1257614940404892,
-0.08959750831127167,
-0.051866497844457626,
-0.16516901552677155,
0.2173919379711151,
-0.021592965349555016,
0.07857322692871094,
-0.029561417177319527,
0.11770006269216537,
0.04697660356760025,
-0.010707763023674488,
0.08040876686573029,
-0.00884756539016962,
0.008373050950467587,
-0.12410011142492294,
-0.02768467366695404,
0.04874192550778389,
0.012378438375890255,
0.0013600040692836046,
-0.09389680624008179,
0.020313434302806854,
-0.15990203619003296,
-0.018549781292676926,
-0.11233476549386978,
0.07682323455810547,
0.025788001716136932,
-0.06466120481491089,
-0.003637736663222313,
-0.04986237734556198,
-0.015892893075942993,
-0.01400828268378973,
0.1717434972524643,
-0.04862768203020096,
0.19366511702537537,
0.03501616790890694,
0.11570870876312256,
-0.1362973153591156,
0.06143493950366974,
-0.06429426372051239,
-0.07528600096702576,
0.07427702099084854,
-0.1537967324256897,
0.05111055448651314,
0.09430045634508133,
-0.030276626348495483,
0.05380253866314888,
0.08795086294412613,
-0.003969982732087374,
0.004800081253051758,
0.15867236256599426,
-0.2786487936973572,
0.01320126373320818,
-0.07396841049194336,
-0.06665283441543579,
0.10506758838891983,
0.06261139363050461,
0.17162823677062988,
0.011681869626045227,
-0.056615445762872696,
0.01595049723982811,
0.02499506063759327,
-0.04915530979633331,
0.04529924690723419,
0.008104361593723297,
-0.010991688817739487,
-0.13640300929546356,
0.08699746429920197,
0.0042801909148693085,
-0.1531187742948532,
0.024680746719241142,
0.2155698835849762,
-0.1260155886411667,
-0.10237220674753189,
-0.03444112092256546,
0.08444061875343323,
-0.11519137024879456,
0.01753072999417782,
-0.030764780938625336,
-0.09109894186258316,
0.07448896765708923,
0.15248911082744598,
0.049206193536520004,
0.11775100976228714,
-0.015379221178591251,
-0.011753370985388756,
-0.05147303268313408,
-0.0317845419049263,
0.025745956227183342,
0.017857374623417854,
-0.08257177472114563,
0.06648801267147064,
-0.022109810262918472,
0.14559012651443481,
-0.09791336953639984,
-0.06602771580219269,
-0.1468091756105423,
-0.009785634465515614,
-0.0695481076836586,
-0.09207163751125336,
-0.08133620768785477,
-0.062133077532052994,
0.0010387726360931993,
-0.03962359577417374,
-0.04795864596962929,
-0.0791037380695343,
-0.10289866477251053,
0.009435068815946579,
-0.02305566892027855,
0.03256045654416084,
-0.06115729361772537,
0.007872066460549831,
0.12092912197113037,
-0.028174830600619316,
0.16686207056045532,
0.1458095908164978,
-0.09536580741405487,
0.10568815469741821,
-0.16346460580825806,
-0.08964221179485321,
0.0939340740442276,
-0.01729099079966545,
0.027899714186787605,
0.11666940152645111,
0.014932696707546711,
0.04195788502693176,
0.035977672785520554,
0.06045130267739296,
0.03587699308991432,
-0.11899011582136154,
0.07665140181779861,
0.009481414221227169,
-0.1612047255039215,
-0.06303887814283371,
-0.08555969595909119,
0.030386725440621376,
0.021575886756181717,
0.12225193530321121,
-0.045776769518852234,
0.0887017622590065,
-0.07972796261310577,
0.027257539331912994,
0.02293219044804573,
-0.181223064661026,
-0.047844018787145615,
-0.053065262734889984,
0.032686229795217514,
0.018960151821374893,
0.1893557906150818,
0.027213018387556076,
-0.03697650134563446,
0.04549255222082138,
0.1042066365480423,
0.005313898902386427,
0.004829791374504566,
0.16259528696537018,
0.09423433989286423,
-0.07654286175966263,
-0.12226779758930206,
0.07556461542844772,
0.019673259928822517,
-0.044067107141017914,
0.10607215762138367,
-0.002448870101943612,
0.020163848996162415,
0.06910120695829391,
-0.014892932027578354,
0.034322552382946014,
-0.044286008924245834,
-0.10698256641626358,
-0.023580113425850868,
0.046367425471544266,
0.00669879000633955,
0.12847968935966492,
0.177873894572258,
-0.002574790036305785,
0.025011489167809486,
-0.0363602340221405,
-0.024931130930781364,
-0.13864666223526,
-0.1558164656162262,
-0.06855984032154083,
-0.14875617623329163,
0.012976853176951408,
-0.0915176048874855,
0.04695429280400276,
0.028682325035333633,
0.06887643784284592,
-0.07052405923604965,
0.04384735971689224,
0.06974220275878906,
-0.12065785378217697,
0.09397104382514954,
-0.028081456199288368,
0.03704333305358887,
-0.006730496883392334,
-0.012833851389586926,
-0.10013298690319061,
0.035936567932367325,
-0.01747855544090271,
0.045271266251802444,
-0.04546798765659332,
0.030429324135184288,
-0.1703072488307953,
-0.124412901699543,
-0.04034453630447388,
0.06420420855283737,
-0.06510858237743378,
0.03512151539325714,
0.019115818664431572,
0.013339218683540821,
0.03305599465966225,
0.23020225763320923,
-0.03704051673412323,
-0.02329315058887005,
-0.042310282588005066,
0.16692522168159485,
-0.014016710221767426,
0.08088304847478867,
-0.03037172369658947,
0.0002500463742762804,
-0.08417443931102753,
0.3385351300239563,
0.3027777075767517,
-0.09020252525806427,
0.019915465265512466,
-0.030905582010746002,
0.03936264291405678,
0.11892254650592804,
0.13376617431640625,
0.09784641861915588,
0.2282467782497406,
-0.07217609137296677,
-0.03032243251800537,
-0.020507147535681725,
-0.011079044081270695,
-0.06650827825069427,
0.0879674032330513,
0.02507801540195942,
-0.05553486570715904,
-0.031693898141384125,
0.0812700018286705,
-0.2327648252248764,
0.10665327310562134,
-0.11289316415786743,
-0.1636168211698532,
-0.039189815521240234,
0.0042042857967317104,
0.08908319473266602,
0.015396242961287498,
0.11228121817111969,
0.009163780137896538,
-0.07585213333368301,
0.017798418179154396,
0.028085503727197647,
-0.24201616644859314,
-0.008133855648338795,
0.060310713946819305,
-0.12939085066318512,
-0.004324504639953375,
-0.027167800813913345,
0.007199867628514767,
0.059822265058755875,
0.029368450865149498,
-0.04319324716925621,
-0.001257759635336697,
-0.010450302623212337,
-0.008644461631774902,
-0.011618612334132195,
0.07065588980913162,
0.046958792954683304,
-0.13329142332077026,
0.06869500875473022,
-0.11774353682994843,
0.033477768301963806,
-0.05866728723049164,
-0.015255378559231758,
0.000037100471672602,
0.03460683673620224,
-0.04829782620072365,
0.07058211416006088,
0.07688362896442413,
-0.015606098808348179,
0.000610517687164247,
-0.0802936851978302,
-0.036274004727602005,
-0.019796574488282204,
-0.09252054989337921,
-0.08371094614267349,
-0.13031646609306335,
-0.11573562026023865,
0.1029667928814888,
-0.02224794402718544,
-0.19213621318340302,
0.03111329674720764,
-0.12165344506502151,
0.045619383454322815,
-0.1751558482646942,
0.11076030135154724,
0.08046020567417145,
0.01831907220184803,
0.011516088619828224,
-0.02576824277639389,
0.08821021765470505,
0.11728470027446747,
-0.07783648371696472,
-0.08528783172369003
] |