Dataset schema (column types and value ranges):

| Column | Type | Range / Values |
| --- | --- | --- |
| `modelId` | string | length 4–112 |
| `sha` | string | length 40 |
| `lastModified` | string | length 24 |
| `tags` | sequence | |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2–38 |
| `config` | null | |
| `id` | string | length 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | string | length 0–186k |
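As a minimal sketch of working with rows in this schema, the snippet below loads and filters them with the 🤗 `datasets` library. The JSON Lines path is a hypothetical local export of this dump, not a published dataset identifier:

```python
from datasets import load_dataset

# Hypothetical local export of this dump; substitute the dataset's
# actual location or hub identifier.
ds = load_dataset("json", data_files="model_cards.jsonl", split="train")

# Keep only the huggingtweets text-generation models listed below.
ds = ds.filter(
    lambda row: row["author"] == "huggingtweets"
    and row["pipeline_tag"] == "text-generation"
)

# Show the five most-downloaded entries (downloads may be null).
top = sorted(ds, key=lambda row: row["downloads"] or 0, reverse=True)[:5]
for row in top:
    print(row["modelId"], row["downloads"], row["likes"])
```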
**huggingtweets/gaytoad2**
sha: `1175e981233f6c5c22918ffa8165aafa7e8c6b78` · lastModified: 2021-08-20T04:46:11.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/gaytoad2
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,400
readme:
--- language: en thumbnail: https://www.huggingtweets.com/gaytoad2/1629434767014/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1428482513417105413/TGlo7HWH_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">العلجوم</div> <div style="text-align: center; font-size: 14px;">@gaytoad2</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from العلجوم. | Data | العلجوم | | --- | --- | | Tweets downloaded | 3232 | | Retweets | 379 | | Short tweets | 1023 | | Tweets kept | 1830 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w8lap6f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gaytoad2's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34u34diu) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34u34diu/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gaytoad2') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
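The usage snippet in each card relies on the pipeline's default generation settings. As a variation, the standard `transformers` keywords `do_sample`, `top_p`, `temperature`, and `max_length` are forwarded to `model.generate`; the sampling values below are illustrative, not taken from the card:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/gaytoad2')

# Illustrative sampling settings; not part of the model card.
outputs = generator(
    "My dream is",
    num_return_sequences=5,
    max_length=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
)
for out in outputs:
    print(out["generated_text"])
```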
**huggingtweets/gerardsans**
sha: `6719394c0faa6ec702040c726235d37ec9400519` · lastModified: 2021-10-19T19:13:05.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/gerardsans
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,401
readme:
--- language: en thumbnail: https://www.huggingtweets.com/gerardsans/1634670781074/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1431241007421665284/qoHnns8I_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ᐸGerardSans/ᐳ🤣🇬🇧</div> <div style="text-align: center; font-size: 14px;">@gerardsans</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ᐸGerardSans/ᐳ🤣🇬🇧. | Data | ᐸGerardSans/ᐳ🤣🇬🇧 | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 648 | | Short tweets | 586 | | Tweets kept | 2016 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/115pr1rh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gerardsans's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10heg4by) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10heg4by/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gerardsans') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/guywiththepie**
sha: `b653d668b674d0a707e1f76e295f195bfd4e9ce3` · lastModified: 2021-07-29T15:40:25.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/guywiththepie
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,402
readme:
--- language: en thumbnail: https://www.huggingtweets.com/guywiththepie/1627573203188/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1332093894218182659/-dCbl61O_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">GuyWithThePie (🎂 in 1 week)</div> <div style="text-align: center; font-size: 14px;">@guywiththepie</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from GuyWithThePie (🎂 in 1 week). | Data | GuyWithThePie (🎂 in 1 week) | | --- | --- | | Tweets downloaded | 3204 | | Retweets | 445 | | Short tweets | 422 | | Tweets kept | 2337 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lir19ia/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @guywiththepie's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ru7uv7v) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ru7uv7v/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/guywiththepie') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/heyimheroic**
sha: `390bbba90813ff0ad494aaa4913262c213aabad4` · lastModified: 2021-05-22T06:51:42.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/heyimheroic
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,403
readme:
--- language: en thumbnail: https://www.huggingtweets.com/heyimheroic/1614216737501/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1344095133348876294/ehT8yba2_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">i'm alice 🤖 AI Bot </div> <div style="font-size: 15px">@heyimheroic bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@heyimheroic's tweets](https://twitter.com/heyimheroic). | Data | Quantity | | --- | --- | | Tweets downloaded | 3205 | | Retweets | 250 | | Short tweets | 317 | | Tweets kept | 2638 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1oztcf0g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @heyimheroic's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3t922fvw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3t922fvw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/heyimheroic') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/hideki_naganuma**
sha: `6927d47bba1afef6befd8962e6dc10f482249850` · lastModified: 2021-05-22T06:52:59.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/hideki_naganuma
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,404
readme:
--- language: en thumbnail: https://www.huggingtweets.com/hideki_naganuma/1619655487401/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1383312391962718211/ppzzt2V__400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">HIDEKI NAGANUMA|CEO OF FUNKY FRESH BEATS 🤖 AI Bot </div> <div style="font-size: 15px">@hideki_naganuma bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@hideki_naganuma's tweets](https://twitter.com/hideki_naganuma). | Data | Quantity | | --- | --- | | Tweets downloaded | 3227 | | Retweets | 1186 | | Short tweets | 208 | | Tweets kept | 1833 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1766c7eg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hideki_naganuma's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13aynk8e) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13aynk8e/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hideki_naganuma') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/intifada**
sha: `0fc6093348eb01d18e5ea476814633914b072016` · lastModified: 2021-05-22T08:28:08.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/intifada
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,405
readme:
--- language: en thumbnail: https://www.huggingtweets.com/intifada/1603110719648/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/608132742224568320/x3yrArdT_400x400.png')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Electronic Intifada 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@intifada bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@intifada's tweets](https://twitter.com/intifada). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>3241</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>6</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>0</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>3235</td> </tr> </tbody> </table> [Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1qmm4ybr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @intifada's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/8f4jzilg) for full transparency and reproducibility. At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/8f4jzilg/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/intifada'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets) <!--- random size file -->
**huggingtweets/jack**
sha: `4ff3174beb5dabf80792c648ebb80a141318a9c4` · lastModified: 2022-05-23T06:40:42.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/jack
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,406
readme:
--- language: en thumbnail: http://www.huggingtweets.com/jack/1653287961086/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1115644092329758721/AFjOr-K8_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">jack</div> <div style="text-align: center; font-size: 14px;">@jack</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from jack. | Data | jack | | --- | --- | | Tweets downloaded | 3231 | | Retweets | 1147 | | Short tweets | 817 | | Tweets kept | 1267 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dibfzjll/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jack's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3f3e0roo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3f3e0roo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jack') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/jacksfilms**
sha: `d8e940ac93e8832ded4ac7d1d74897fbf3ddcb84` · lastModified: 2022-05-21T01:18:12.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/jacksfilms
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,407
readme:
--- language: en thumbnail: http://www.huggingtweets.com/jacksfilms/1653095886748/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1523106752668966913/tWNV2zbS_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">jacksfilms🌹</div> <div style="text-align: center; font-size: 14px;">@jacksfilms</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from jacksfilms🌹. | Data | jacksfilms🌹 | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 97 | | Short tweets | 444 | | Tweets kept | 2708 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hsenlsv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jacksfilms's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ow20675) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ow20675/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jacksfilms') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/jeemstate**
sha: `b252aff0fd03266fc72d49b097e5d4aaceacaaa9` · lastModified: 2021-05-22T09:27:23.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/jeemstate
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,408
readme:
--- language: en thumbnail: https://www.huggingtweets.com/jeemstate/1614134968037/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1323369978247172097/HXimQ-3i_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">jeemers 🤖 AI Bot </div> <div style="font-size: 15px">@jeemstate bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@jeemstate's tweets](https://twitter.com/jeemstate). | Data | Quantity | | --- | --- | | Tweets downloaded | 3243 | | Retweets | 220 | | Short tweets | 400 | | Tweets kept | 2623 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3053t272/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jeemstate's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vb2sesn) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vb2sesn/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jeemstate') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/kendalljenner**
sha: `e60101e9260f3f6e1453f1d025106fd22d4004fc` · lastModified: 2022-06-12T14:23:25.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/kendalljenner
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,409
readme:
--- language: en thumbnail: http://www.huggingtweets.com/kendalljenner/1655043799952/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1417948566287360000/4nmMbMAu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Kendall</div> <div style="text-align: center; font-size: 14px;">@kendalljenner</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Kendall. | Data | Kendall | | --- | --- | | Tweets downloaded | 3168 | | Retweets | 614 | | Short tweets | 675 | | Tweets kept | 1879 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2bjob9fa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kendalljenner's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1r9j12kw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1r9j12kw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/kendalljenner') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/leannelleeds-scalzi**
sha: `a8b11380fea9bee8e194779245b6032bd617bf02` · lastModified: 2021-11-29T18:53:24.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/leannelleeds-scalzi
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,410
readme:
--- language: en thumbnail: https://www.huggingtweets.com/leannelleeds-scalzi/1638211999598/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1210600033885794307/_XOFk1EQ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/3475245194/59992708a306aaa836bf1699f8b47d5d_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Leanne Leeds 👻 & John Scalzi</div> <div style="text-align: center; font-size: 14px;">@leannelleeds-scalzi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Leanne Leeds 👻 & John Scalzi. | Data | Leanne Leeds 👻 | John Scalzi | | --- | --- | --- | | Tweets downloaded | 2298 | 3249 | | Retweets | 321 | 190 | | Short tweets | 64 | 269 | | Tweets kept | 1913 | 2790 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2swuksao/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @leannelleeds-scalzi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vhrptd6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vhrptd6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/leannelleeds-scalzi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/micky_cow**
sha: `958b1540aab2d690c534a04ed4de590f94a86591` · lastModified: 2021-05-22T14:28:03.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/micky_cow
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,411
readme:
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362790988356464645/TGSSbvT0_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Micky the cow 🤖 AI Bot </div> <div style="font-size: 15px">@micky_cow bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@micky_cow's tweets](https://twitter.com/micky_cow). | Data | Quantity | | --- | --- | | Tweets downloaded | 135 | | Retweets | 0 | | Short tweets | 15 | | Tweets kept | 120 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ugkdnx6z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @micky_cow's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jfh2mjg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jfh2mjg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/micky_cow') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/narendramodi**
sha: `019eaa5d093811eec8f28571bcd99be5a50ef302` · lastModified: 2022-02-21T17:52:51.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/narendramodi
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,412
readme:
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1479443900368519169/PgOyX1vt_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Narendra Modi</div> <div style="text-align: center; font-size: 14px;">@narendramodi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Narendra Modi. | Data | Narendra Modi | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 42 | | Short tweets | 13 | | Tweets kept | 3195 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f7klvhh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @narendramodi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hyykmdd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hyykmdd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/narendramodi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/naval-warikoo**
sha: `21ec2c2f8cd57cb0d877fb2805cb0c6c42ca99ef` · lastModified: 2021-08-20T09:56:09.000Z · tags: `[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/naval-warikoo
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,413
readme:
--- language: en thumbnail: https://www.huggingtweets.com/naval-warikoo/1629453365067/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1156881198582382592/yUbrONnS_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Naval & Ankur Warikoo</div> <div style="text-align: center; font-size: 14px;">@naval-warikoo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Naval & Ankur Warikoo. | Data | Naval | Ankur Warikoo | | --- | --- | --- | | Tweets downloaded | 3248 | 3249 | | Retweets | 149 | 324 | | Short tweets | 640 | 397 | | Tweets kept | 2459 | 2528 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/g5rn77ku/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @naval-warikoo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o3o6mau) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o3o6mau/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/naval-warikoo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/oneonlygriffin**
sha: `41ccd999fe214e2bc3f66068113c68bc11351e24` · lastModified: 2021-05-22T17:29:58.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/oneonlygriffin
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,414
readme:
--- language: en thumbnail: https://www.huggingtweets.com/oneonlygriffin/1617765611936/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1233963742754459648/GfM8_yrS_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Griffin 🤖 AI Bot </div> <div style="font-size: 15px">@oneonlygriffin bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@oneonlygriffin's tweets](https://twitter.com/oneonlygriffin). | Data | Quantity | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 449 | | Short tweets | 580 | | Tweets kept | 2211 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6li8zjxg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oneonlygriffin's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2opessnr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2opessnr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/oneonlygriffin') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/pareinoia**
sha: `824deb5fe1a6e2ac80f3fbd12718c49ba38dc1ab` · lastModified: 2021-05-22T18:01:20.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/pareinoia
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,415
readme:
--- language: en thumbnail: https://www.huggingtweets.com/pareinoia/1616618526006/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1350516642049110016/5Fm9kSGJ_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">🥝 kiwi, best of fruits 🥝 🤖 AI Bot </div> <div style="font-size: 15px">@pareinoia bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@pareinoia's tweets](https://twitter.com/pareinoia). | Data | Quantity | | --- | --- | | Tweets downloaded | 1999 | | Retweets | 464 | | Short tweets | 306 | | Tweets kept | 1229 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ca7c493/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pareinoia's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tntgk3a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tntgk3a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/pareinoia') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
**huggingtweets/phantasyphiend**
sha: `65c594b6f5de800820fe5d8c4bdcb42b3229450e` · lastModified: 2021-05-22T18:35:02.000Z · tags: `[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]`
pipeline_tag: text-generation · private: false · author: huggingtweets · config: null · id: huggingtweets/phantasyphiend
downloads: 3 · likes: null · library_name: transformers · `__index_level_0__`: 21,416
readme:
--- language: en thumbnail: https://www.huggingtweets.com/phantasyphiend/1616698324465/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000130578639/207d4a749cd598bc91c77b9f9599cfaf_400x400.jpeg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Boredom is Strength 🤖 AI Bot </div> <div style="font-size: 15px">@phantasyphiend bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@phantasyphiend's tweets](https://twitter.com/phantasyphiend). | Data | Quantity | | --- | --- | | Tweets downloaded | 3233 | | Retweets | 1236 | | Short tweets | 105 | | Tweets kept | 1892 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3o8six8s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @phantasyphiend's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/367y74jo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/367y74jo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/phantasyphiend') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/philoso_foster
35bfaa5bf228818304e1a854425c0026e5436925
2021-05-22T18:37:27.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/philoso_foster
3
null
transformers
21,417
---
language: en
thumbnail: https://www.huggingtweets.com/philoso_foster/1616729629058/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357526479987331073/YKjgUnEz_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">jen foster, scrunchie rights activist 🤖 AI Bot </div>
<div style="font-size: 15px">@philoso_foster bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@philoso_foster's tweets](https://twitter.com/philoso_foster).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 634 |
| Short tweets | 410 |
| Tweets kept | 2197 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/15lvsiy3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @philoso_foster's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1m83r9mb) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1m83r9mb/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/philoso_foster')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/praisegodbarbon
4dad11198080245f9899ef93635a6029a754282e
2021-10-24T03:47:17.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/praisegodbarbon
3
null
transformers
21,418
---
language: en
thumbnail: https://www.huggingtweets.com/praisegodbarbon/1635047234116/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1381764452098437120/74IgKP07_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Boston Psychology PhD</div>
<div style="text-align: center; font-size: 14px;">@praisegodbarbon</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Boston Psychology PhD.

| Data | Boston Psychology PhD |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 810 |
| Short tweets | 265 |
| Tweets kept | 2137 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h4r5tyq8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @praisegodbarbon's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/praisegodbarbon')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
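As a quick sanity check on the table above, the kept tweets are exactly what remains after retweets and short tweets are filtered out:

```python
downloaded, retweets, short = 3212, 810, 265
kept = downloaded - retweets - short
assert kept == 2137  # matches the "Tweets kept" row above
```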
huggingtweets/preyproject
7caec8805ae8ba4ecb048ad201cf63023825c6f0
2021-05-22T19:25:30.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/preyproject
3
null
transformers
21,419
---
language: en
thumbnail: https://www.huggingtweets.com/preyproject/1602171887142/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1218166964461490176/wzbbjD2z_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Prey 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@preyproject bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@preyproject's tweets](https://twitter.com/preyproject).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3201 |
| Retweets | 173 |
| Short tweets | 160 |
| Tweets kept | 2868 |

[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/150pjkc4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @preyproject's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3i2h7bmn) for full transparency and reproducibility.

At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3i2h7bmn/artifacts) is logged and versioned.

## Intended uses & limitations

### How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/preyproject')
generator("My dream is", num_return_sequences=5)
```

### Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/prezoh
40cd460e32bb240867db26905556401bf278410e
2022-01-28T00:19:35.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/prezoh
3
null
transformers
21,420
---
language: en
thumbnail: http://www.huggingtweets.com/prezoh/1643329170626/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1475922946166251521/ySh4PG3J_400x400.png&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">prezoh</div>
<div style="text-align: center; font-size: 14px;">@prezoh</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from prezoh.

| Data | prezoh |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 23 |
| Short tweets | 909 |
| Tweets kept | 2316 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vixstd5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prezoh's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fqwb6pxf) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fqwb6pxf/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/prezoh')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
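Note that `max_length` counts the prompt tokens as well as the continuation. If you only want to bound the continuation, recent `transformers` versions accept `max_new_tokens` instead (assumed available in your installed version):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/prezoh')

# max_new_tokens bounds only the generated continuation, not the prompt.
out = generator("My dream is", max_new_tokens=40, do_sample=True)
print(out[0]['generated_text'])
```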
huggingtweets/projectalpha22
df5b1cee6c9ff18f1a0ca63d430943a361d26000
2021-05-22T19:33:07.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/projectalpha22
3
null
transformers
21,421
---
language: en
thumbnail: https://www.huggingtweets.com/projectalpha22/1619426799771/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1252253457756631043/oZnj5yYj_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ProjectAlpha22 🏳️‍🌈 🤖 AI Bot </div>
<div style="font-size: 15px">@projectalpha22 bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@projectalpha22's tweets](https://twitter.com/projectalpha22).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3140 |
| Retweets | 2683 |
| Short tweets | 46 |
| Tweets kept | 411 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vu11jz3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @projectalpha22's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/l3kq0669) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/l3kq0669/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/projectalpha22')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/queenofbithynia
9477456976cc49fb17d0f80f935af729fbaa2f19
2021-09-20T18:03:12.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/queenofbithynia
3
null
transformers
21,422
---
language: en
thumbnail: https://www.huggingtweets.com/queenofbithynia/1632160988421/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1010627358879932416/0xVVQg3X_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">the needle-felted head of joyce carol oates</div>
<div style="text-align: center; font-size: 14px;">@queenofbithynia</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from the needle-felted head of joyce carol oates.

| Data | the needle-felted head of joyce carol oates |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 4 |
| Short tweets | 30 |
| Tweets kept | 3216 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21v0b2yw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @queenofbithynia's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ze5vcu7) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ze5vcu7/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/queenofbithynia')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
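Because the pipeline samples from the model, output changes between calls. For reproducible samples, `transformers` ships a `set_seed` helper; the seed value below is arbitrary:

```python
from transformers import pipeline, set_seed

set_seed(42)  # arbitrary seed; fixes the sampling RNG so runs repeat exactly
generator = pipeline('text-generation', model='huggingtweets/queenofbithynia')
print(generator("My dream is", num_return_sequences=3))
```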
huggingtweets/rajcs4
1e0583adecc395682bc5927f6b58c58599ecc539
2021-05-22T20:17:39.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/rajcs4
3
null
transformers
21,423
---
language: en
thumbnail: https://www.huggingtweets.com/rajcs4/1605724471297/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1240082056861933569/KlfArvNx_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Rajiv Shah 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@rajcs4 bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@rajcs4's tweets](https://twitter.com/rajcs4).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 709 |
| Retweets | 207 |
| Short tweets | 10 |
| Tweets kept | 492 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qquksx3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rajcs4's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5jjg9ijo) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5jjg9ijo/artifacts) is logged and versioned.

## Intended uses & limitations

### How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/rajcs4')
generator("My dream is", num_return_sequences=5)
```

### Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/saxena_puru
ba18a1b1d501476217bc931fa22b85d00a119cda
2021-08-12T02:56:41.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/saxena_puru
3
null
transformers
21,424
---
language: en
thumbnail: https://www.huggingtweets.com/saxena_puru/1628736988431/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1127527290970071040/NJIJtY2g_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Puru Saxena</div>
<div style="text-align: center; font-size: 14px;">@saxena_puru</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Puru Saxena.

| Data | Puru Saxena |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 8 |
| Short tweets | 647 |
| Tweets kept | 2595 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xkgnnc0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @saxena_puru's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1l7iz1fz) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1l7iz1fz/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/saxena_puru')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
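To avoid re-downloading the weights on every run, you can snapshot the model locally with the standard `save_pretrained`/`from_pretrained` round trip; the directory name below is a hypothetical example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'huggingtweets/saxena_puru'
local_dir = './saxena_puru-local'  # hypothetical example path

AutoTokenizer.from_pretrained(model_id).save_pretrained(local_dir)
AutoModelForCausalLM.from_pretrained(model_id).save_pretrained(local_dir)

# Later runs can load from disk instead of the Hub.
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir)
```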
huggingtweets/seanmombo
ea4de59fb27a3f318a5056db4905534be32a18a8
2022-03-23T16:22:13.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/seanmombo
3
null
transformers
21,425
---
language: en
thumbnail: http://www.huggingtweets.com/seanmombo/1648052490598/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1494366913090273285/lmJtNNT2_400x400.png&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mo bombo</div>
<div style="text-align: center; font-size: 14px;">@seanmombo</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from mo bombo.

| Data | mo bombo |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 5 |
| Short tweets | 560 |
| Tweets kept | 2684 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bl9qwdw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @seanmombo's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p8cy5st) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p8cy5st/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/seanmombo')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/shamscharania
552401243cd2fbf4c14332061042a713c7fed464
2022-05-05T18:24:58.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/shamscharania
3
null
transformers
21,426
---
language: en
thumbnail: http://www.huggingtweets.com/shamscharania/1651775009937/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1436503855861276680/8qzEXb9B_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Shams Charania</div>
<div style="text-align: center; font-size: 14px;">@shamscharania</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Shams Charania.

| Data | Shams Charania |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 179 |
| Short tweets | 6 |
| Tweets kept | 3065 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/cqone02p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shamscharania's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bxi3cc8) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bxi3cc8/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/shamscharania')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
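Since the checkpoint is a fine-tuned GPT-2, its size should match the base model. A small sketch that counts parameters to confirm this (roughly 124M is expected for base GPT-2):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('huggingtweets/shamscharania')
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # expected: roughly 124M, as in base GPT-2
```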
huggingtweets/shartitheclown
442f034536a39839ff8900203d45af3c9a4e9280
2021-05-22T22:40:38.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/shartitheclown
3
null
transformers
21,427
---
language: en
thumbnail: https://www.huggingtweets.com/shartitheclown/1614136368554/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362921843292831749/wwbmtSCM_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Franklin 💀✨ 🤖 AI Bot </div>
<div style="font-size: 15px">@shartitheclown bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@shartitheclown's tweets](https://twitter.com/shartitheclown).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 1453 |
| Short tweets | 164 |
| Tweets kept | 1575 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bp8bisb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shartitheclown's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bc8j6l7q) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bc8j6l7q/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/shartitheclown')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/slime_machine
320e1a76afe69d91723c0b18de904da3c88e17dd
2021-12-23T09:54:26.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/slime_machine
3
null
transformers
21,428
---
language: en
thumbnail: http://www.huggingtweets.com/slime_machine/1640253262516/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1468034520326701062/LDp_yytu_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rich homie cron</div>
<div style="text-align: center; font-size: 14px;">@slime_machine</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from rich homie cron.

| Data | rich homie cron |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 590 |
| Short tweets | 494 |
| Tweets kept | 2150 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28uf2bgx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @slime_machine's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3h5ua6ik) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3h5ua6ik/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/slime_machine')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
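The exact hyperparameters are recorded in the W&B run linked above. The sketch below is not the huggingtweets training script; it is only a minimal illustration, on toy data and with assumed settings, of what causal-LM fine-tuning of GPT-2 looks like with the standard `Trainer` API (it also assumes the `datasets` library is installed):

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tweets = ["example tweet one", "example tweet two"]  # stand-in for the real corpus
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token

dataset = Dataset.from_dict({"text": tweets}).map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained("gpt2"),
    args=TrainingArguments(output_dir="finetune-demo", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()
```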
huggingtweets/stablekwon
8f429d1804281eb152e696b804bb21712bb3e093
2022-05-27T22:11:17.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/stablekwon
3
null
transformers
21,429
---
language: en
thumbnail: http://www.huggingtweets.com/stablekwon/1653689473049/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1512936067846270978/SO5a1OMb_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Do Kwon 🌕</div>
<div style="text-align: center; font-size: 14px;">@stablekwon</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Do Kwon 🌕.

| Data | Do Kwon 🌕 |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 447 |
| Short tweets | 680 |
| Tweets kept | 2114 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26ij0ppu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stablekwon's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/q6os4sts) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/q6os4sts/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/stablekwon')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/taliasturm
a99c70dadd4d32696c773dd1540eebb985b19892
2021-05-23T00:34:38.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/taliasturm
3
null
transformers
21,430
---
language: en
thumbnail: https://www.huggingtweets.com/taliasturm/1617900946245/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1355263497639055361/W68QzpUo_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">asuka quantico soryu 🏴‍☠️ 🤖 AI Bot </div>
<div style="font-size: 15px">@taliasturm bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@taliasturm's tweets](https://twitter.com/taliasturm).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3231 |
| Retweets | 709 |
| Short tweets | 306 |
| Tweets kept | 2216 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2sxr8nu3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @taliasturm's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3efhta1i) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3efhta1i/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/taliasturm')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tallfuzzball
36dc6089280fb469209b69796112e35be9b6352d
2022-03-27T10:18:12.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/tallfuzzball
3
null
transformers
21,431
---
language: en
thumbnail: http://www.huggingtweets.com/tallfuzzball/1648376287891/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1427037875242094595/9nOa6vhI_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PAPA BARKS MAR 25 - APR 15</div>
<div style="text-align: center; font-size: 14px;">@tallfuzzball</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from PAPA BARKS MAR 25 - APR 15.

| Data | PAPA BARKS MAR 25 - APR 15 |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 746 |
| Short tweets | 765 |
| Tweets kept | 1733 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jvsle26/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tallfuzzball's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1dg9rstx) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1dg9rstx/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/tallfuzzball')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
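If a GPU is available, the pipeline can run on it by passing a device index; the sketch below falls back to CPU when CUDA is absent:

```python
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # -1 keeps the pipeline on CPU
generator = pipeline('text-generation', model='huggingtweets/tallfuzzball', device=device)
print(generator("My dream is", num_return_sequences=5))
```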
huggingtweets/thatstupiddoll
32ba71651c6d7ceb8591a2f45b4f85c0279efa6c
2021-05-23T01:15:35.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/thatstupiddoll
3
null
transformers
21,432
---
language: en
thumbnail: https://www.huggingtweets.com/thatstupiddoll/1617902319163/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1226040597779230720/Az4lUGMe_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ThatStupidDoll 🤖 AI Bot </div>
<div style="font-size: 15px">@thatstupiddoll bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@thatstupiddoll's tweets](https://twitter.com/thatstupiddoll).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 1350 |
| Short tweets | 485 |
| Tweets kept | 1368 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ynlpcpqs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thatstupiddoll's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kfzy9s0) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kfzy9s0/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/thatstupiddoll')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/thinkiamsad
9381cef3799c6faf3b62ec52a5dc47fe1b28fea4
2021-05-23T02:08:14.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/thinkiamsad
3
null
transformers
21,433
--- language: en thumbnail: https://www.huggingtweets.com/thinkiamsad/1614110614531/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1277678034577956865/Q2rbCSah_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">🏴i open up my wallet and it's full of blood.🏴 🤖 AI Bot </div> <div style="font-size: 15px">@thinkiamsad bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@thinkiamsad's tweets](https://twitter.com/thinkiamsad). | Data | Quantity | | --- | --- | | Tweets downloaded | 3212 | | Retweets | 2728 | | Short tweets | 38 | | Tweets kept | 446 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/333v8jzi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thinkiamsad's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3sgh1eyx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3sgh1eyx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/thinkiamsad') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tim_hosgood
ff6a389e3e641b36b1b1e97e32abd90ab25b9ba0
2021-05-23T02:21:16.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/tim_hosgood
3
null
transformers
21,434
--- language: en thumbnail: https://www.huggingtweets.com/tim_hosgood/1616770120572/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1280298808065302532/F7MrU729_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Tim Hosgood 🤖 AI Bot </div> <div style="font-size: 15px">@tim_hosgood bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@tim_hosgood's tweets](https://twitter.com/tim_hosgood). | Data | Quantity | | --- | --- | | Tweets downloaded | 3238 | | Retweets | 383 | | Short tweets | 179 | | Tweets kept | 2676 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12cym848/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tim_hosgood's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21afj5yw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21afj5yw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tim_hosgood') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tundeeednut
d8d3cc23bca8f0f9ab71c4ce54bef61dc2701e5c
2021-05-23T02:59:51.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/tundeeednut
3
null
transformers
21,435
--- language: en thumbnail: https://www.huggingtweets.com/tundeeednut/1618647737599/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1183255447782023169/jRr7LNFv_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Tunde Ednut 🤖 AI Bot </div> <div style="font-size: 15px">@tundeeednut bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@tundeeednut's tweets](https://twitter.com/tundeeednut). | Data | Quantity | | --- | --- | | Tweets downloaded | 2767 | | Retweets | 689 | | Short tweets | 46 | | Tweets kept | 2032 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dq56wnm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tundeeednut's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34yo9k1n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34yo9k1n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tundeeednut') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/vanpelt
cb88bdec769af19ed2848a87864ad92068792b5a
2021-05-23T03:32:00.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/vanpelt
3
null
transformers
21,436
--- language: en thumbnail: https://www.huggingtweets.com/vanpelt/1605216961273/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/995187395531161601/4mrM2flB_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Chris Van Pelt (CVP) 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@vanpelt bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@vanpelt's tweets](https://twitter.com/vanpelt). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>813</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>87</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>43</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>683</td> </tr> </tbody> </table> [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1oxi9b39/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vanpelt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bfgtsxu) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bfgtsxu/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/vanpelt'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/verafiedposter
68590bb6de45b1598197faad072949a6a95823ff
2021-05-23T03:44:52.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/verafiedposter
3
null
transformers
21,437
--- language: en thumbnail: https://www.huggingtweets.com/verafiedposter/1616696054193/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276868958600204289/OgyIJae3_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">big black cloud will come 🤖 AI Bot </div> <div style="font-size: 15px">@verafiedposter bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@verafiedposter's tweets](https://twitter.com/verafiedposter). | Data | Quantity | | --- | --- | | Tweets downloaded | 3222 | | Retweets | 226 | | Short tweets | 234 | | Tweets kept | 2762 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2byyqoj9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @verafiedposter's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27jpl5l8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27jpl5l8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/verafiedposter') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
hugo/secret-project-all-w1
b353f740206240a57f6979fb3fd5e9b7ca9ffbe4
2021-12-09T12:18:12.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
hugo
null
hugo/secret-project-all-w1
3
null
transformers
21,438
Entry not found
hugo/secret-project-ms-2
0920a0c8d123f1398575dadeb9bae222bdb47a80
2021-12-07T01:03:32.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
hugo
null
hugo/secret-project-ms-2
3
null
transformers
21,439
Entry not found
husnu/xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6
6435e4a236a432670d4e3794004698f25d3c4ff8
2022-01-15T05:09:21.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
question-answering
false
husnu
null
husnu/xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6
3
null
transformers
21,440
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the Turkish SQuAD (TQuAD) dataset. It achieves the following results on the evaluation set: - Loss: 2.8135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 350 | 3.8389 | | 4.4474 | 2.0 | 700 | 3.3748 | | 3.512 | 3.0 | 1050 | 3.0657 | | 3.512 | 4.0 | 1400 | 2.9219 | | 3.1526 | 5.0 | 1750 | 2.8517 | | 2.9972 | 6.0 | 2100 | 2.8135 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
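The card above omits a usage snippet. The following is a minimal sketch of extractive question answering with this checkpoint, assuming the standard `transformers` pipeline API; the Turkish question and context are illustrative placeholders, not drawn from TQuAD:

```python
from transformers import pipeline

# Minimal sketch: extractive QA with this checkpoint.
# The question/context strings below are illustrative placeholders.
qa = pipeline(
    "question-answering",
    model="husnu/xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6",
)
result = qa(
    question="Türkiye'nin başkenti neresidir?",
    context="Türkiye'nin başkenti Ankara'dır ve en kalabalık şehri İstanbul'dur.",
)
print(result["answer"], round(result["score"], 3))
```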
iAmmarTahir/domain-adapted-negation
92f65650110b920b2238aa2f72a94dbde8c775e2
2021-05-20T16:46:43.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers" ]
text-classification
false
iAmmarTahir
null
iAmmarTahir/domain-adapted-negation
3
null
transformers
21,441
Entry not found
ibraheemmoosa/xlmindic-base-uniscript-soham
c1feafbeaf09f88f6bba8675cd99d5770f6bc0b3
2022-01-12T12:28:05.000Z
[ "pytorch", "tf", "jax", "albert", "text-classification", "as", "bn", "gu", "hi", "mr", "ne", "or", "pa", "si", "sa", "bpy", "mai", "bh", "gom", "dataset:oscar", "transformers", "multilingual", "xlmindic", "nlp", "indoaryan", "indicnlp", "iso15919", "transliteration", "license:apache-2.0", "co2_eq_emissions" ]
text-classification
false
ibraheemmoosa
null
ibraheemmoosa/xlmindic-base-uniscript-soham
3
null
transformers
21,442
--- language: - as - bn - gu - hi - mr - ne - or - pa - si - sa - bpy - mai - bh - gom license: apache-2.0 datasets: - oscar tags: - multilingual - albert - xlmindic - nlp - indoaryan - indicnlp - iso15919 - transliteration - text-classification widget: - text: 'cīnēra madhyāñcalē āraō ēkaṭi śaharēra bāsindārā ābāra gharabandī haẏē paṛēchēna. āja maṅgalabāra natuna karē lakaḍāuna–saṁkrānta bidhiniṣēdha jāri haōẏāra para gharē āṭakā paṛēchēna tām̐rā. karōnāra ati saṁkrāmaka natuna dharana amikranēra bistāra ṭhēkātē ēmana padakṣēpa niẏēchē kartr̥pakṣa. khabara bārtā saṁsthā ēēphapira.' co2_eq_emissions: emissions: "0.21 in grams of CO2" source: "calculated using this website https://mlco2.github.io/impact/#compute" training_type: "fine-tuning" geographical_location: "NA" hardware_used: "P100 for about 1.5 hours" --- # XLMIndic Base Uniscript This model is fine-tuned from [this model](https://huggingface.co/ibraheemmoosa/xlmindic-base-uniscript) on the Soham Bangla News Classification task, which is part of the IndicGLUE benchmark. **Before pretraining this model we transliterate the text to [ISO-15919](https://en.wikipedia.org/wiki/ISO_15919) format using the [Aksharamukha](https://pypi.org/project/aksharamukha/) library.** A demo of the Aksharamukha library is hosted [here](https://aksharamukha.appspot.com/converter) where you can transliterate your text and use it on our model on the inference widget. ## Model description This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters - 512 sequence length ## Training data This model was fine-tuned on the Soham dataset, which is part of the IndicGLUE benchmark. ## Transliteration *The unique component of this model is that it takes in ISO-15919 transliterated text.* The motivation is as follows. When two languages share vocabularies, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts, it is difficult for a model to make the connection. Thus, if we can write the two languages in a single script, it is easier for the model to learn good cross-lingual representations. For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script. An example of ISO-15919 transliteration for a piece of **Bangla** text is the following: **Original:** "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।" **Transliterated:** 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'
Another example for a piece of **Hindi** text is the following: **Original:** "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है" **Transliterated:** "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai" ## Training procedure ### Preprocessing The texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000. ### Training The model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*. ## Evaluation results See results specific to Soham in the following table. ### IndicGLUE Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript (This Model) | XLMIndic-Base-Multiscript (Ablation Model) -----| ----- | ----- | ------ | ------- | -------- Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76 Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26 Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58 BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50 Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49 INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69 INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23 IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84 IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20 MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33 Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21 ## Intended uses & limitations This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada, etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919). You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2. ### How to use To use this model you will need to first install the [Aksharamukha](https://pypi.org/project/aksharamukha/) library.
```bash pip install aksharamukha ``` Using this library you can transliterate any text written in Indic scripts in the following way: ```python >>> from aksharamukha import transliterate >>> text = "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है" >>> transliterated_text = transliterate.process('autodetect', 'ISO', text) >>> transliterated_text "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai" ``` Then you can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> from aksharamukha import transliterate >>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-uniscript') >>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।" >>> transliterated_text = transliterate.process('Bengali', 'ISO', text) >>> transliterated_text 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli [MASK], aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama [MASK] puraskāra lābha karēna.' >>> unmasker(transliterated_text) [{'score': 0.39705055952072144, 'token': 1500, 'token_str': 'abhinētā', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli abhinētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.20499080419540405, 'token': 3585, 'token_str': 'kabi', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.1314290314912796, 'token': 15402, 'token_str': 'rājanētā', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli rājanētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.060830358415842056, 'token': 3212, 'token_str': 'kalākāra', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kalākāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 
1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.035522934049367905, 'token': 11586, 'token_str': 'sāhityakāra', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli sāhityakāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}] ``` ### Limitations and bias Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions. ## Contact Feel free to contact us if you have any ideas or if you want to know more about our models. - Ibraheem Muhammad Moosa ([email protected]) - Mahmud Elahi Akhter ([email protected]) - Ashfia Binte Habib ## BibTeX entry and citation info Coming soon!
ietz/bert-base-uncased-finetuned-jira-inteldaos-issue-titles-and-bodies
09b67345cce31cb100218fb559afffed77d7be0d
2022-02-04T08:47:13.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ietz
null
ietz/bert-base-uncased-finetuned-jira-inteldaos-issue-titles-and-bodies
3
null
transformers
21,443
Entry not found
ifis-zork/ZORK_AI_SCI_FI_TEMP
8056350d3d25980b8cb2e12a19c915b46be8b3d2
2021-07-21T13:06:26.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer" ]
text-generation
false
ifis-zork
null
ifis-zork/ZORK_AI_SCI_FI_TEMP
3
null
transformers
21,444
--- tags: - generated_from_trainer model_index: - name: ZORK_AI_SCI_FI_TEMP results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZORK_AI_SCI_FI_TEMP This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
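Since the card leaves usage unspecified, here is a minimal sketch assuming standard causal-LM inference through the `transformers` pipeline; the prompt is an illustrative placeholder:

```python
from transformers import pipeline

# Minimal sketch: sampling a continuation from this fine-tuned GPT-2 checkpoint.
# The prompt is an illustrative placeholder.
generator = pipeline("text-generation", model="ifis-zork/ZORK_AI_SCI_FI_TEMP")
out = generator(
    "You are standing in a dimly lit corridor.",
    max_length=80,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```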
ilevs/opus-mt-ru-en-finetuned-ru-to-en
f275d28a133677e7398d0bf83890463c76a70946
2022-01-16T19:29:07.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
ilevs
null
ilevs/opus-mt-ru-en-finetuned-ru-to-en
3
null
transformers
21,445
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-ru-en-finetuned-ru-to-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ru-en-finetuned-ru-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.1251 - Bleu: 15.9892 - Gen Len: 5.0168 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 2.6914 | 1.0 | 4956 | 2.5116 | 11.1411 | 4.9989 | | 2.2161 | 2.0 | 9912 | 2.3255 | 11.7334 | 5.1678 | | 1.9237 | 3.0 | 14868 | 2.2388 | 13.6802 | 5.1463 | | 1.7087 | 4.0 | 19824 | 2.1892 | 13.8815 | 5.0625 | | 1.5423 | 5.0 | 24780 | 2.1586 | 14.8182 | 5.0779 | | 1.3909 | 6.0 | 29736 | 2.1445 | 14.3603 | 5.2194 | | 1.3041 | 7.0 | 34692 | 2.1323 | 16.2138 | 5.0438 | | 1.2078 | 8.0 | 39648 | 2.1275 | 16.2574 | 5.0165 | | 1.1523 | 9.0 | 44604 | 2.1255 | 16.0368 | 5.014 | | 1.1005 | 10.0 | 49560 | 2.1251 | 15.9892 | 5.0168 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
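The "Intended uses" section above is empty; a minimal inference sketch, assuming the standard `transformers` translation pipeline for Marian checkpoints (the Russian sentence is an illustrative placeholder):

```python
from transformers import pipeline

# Minimal sketch: Russian-to-English translation with this checkpoint.
# The input sentence is an illustrative placeholder.
translator = pipeline("translation", model="ilevs/opus-mt-ru-en-finetuned-ru-to-en")
print(translator("Добрый день! Как у вас дела?")[0]["translation_text"])
```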
iliketurtles/distilgpt2-finetuned-wikitext2
7a027d44eee5992e8963894332de9fe71b270edf
2021-12-21T19:51:47.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
iliketurtles
null
iliketurtles/distilgpt2-finetuned-wikitext2
3
null
transformers
21,446
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset (WikiText-2, per the model name). It achieves the following results on the evaluation set: - Loss: 3.6424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7608 | 1.0 | 2334 | 3.6655 | | 3.6335 | 2.0 | 4668 | 3.6455 | | 3.6066 | 3.0 | 7002 | 3.6424 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
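The reported evaluation loss of 3.6424 corresponds to a perplexity of exp(3.6424) ≈ 38.2. A minimal sketch of that conversion on a sample sentence, assuming standard `transformers`/`torch` APIs; the text is an illustrative placeholder:

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: mean per-token cross-entropy -> perplexity.
# exp(eval loss) = exp(3.6424) ~ 38.2 for the held-out split reported above.
tok = AutoTokenizer.from_pretrained("iliketurtles/distilgpt2-finetuned-wikitext2")
model = AutoModelForCausalLM.from_pretrained("iliketurtles/distilgpt2-finetuned-wikitext2")

enc = tok("The treaty was signed in 1648.", return_tensors="pt")  # placeholder text
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # labels are shifted internally
print(f"perplexity: {math.exp(loss.item()):.1f}")
```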
imosnoi/md_inv
3ab969ee8983c393d43768cd600e54fd7f45722b
2022-01-18T21:20:06.000Z
[ "pytorch", "layoutlm", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
imosnoi
null
imosnoi/md_inv
3
null
transformers
21,447
Entry not found
imrit1999/DialoGPT-small-MCU
9b8d8cd5ee4d254df148f8cf36576b47633ddca5
2021-07-17T23:35:10.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "license:mit" ]
conversational
false
imrit1999
null
imrit1999/DialoGPT-small-MCU
3
null
transformers
21,448
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png tags: - conversational license: mit --- # DialoGPT Trained on MCU Dialogues
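A minimal single-turn chat sketch, assuming the standard DialoGPT usage pattern from `transformers`; the user message is an illustrative placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: one chat turn with the standard DialoGPT pattern.
# The user message is an illustrative placeholder.
tokenizer = AutoTokenizer.from_pretrained("imrit1999/DialoGPT-small-MCU")
model = AutoModelForCausalLM.from_pretrained("imrit1999/DialoGPT-small-MCU")

input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```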
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz-cv8
04170d650fec365a8e850f52dbcd24f09758d18e
2022-03-23T18:27:00.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ab", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz-cv8
3
null
transformers
21,449
--- language: - ab license: apache-2.0 tags: - ab - automatic-speech-recognition - generated_from_trainer - hf-asr-leaderboard - model_for_talk - mozilla-foundation/common_voice_8_0 - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: XLS-R-300M - Abkhaz results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: ab metrics: - name: Test WER type: wer value: 27.6 - name: Test CER type: cer value: 4.577 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-abkhaz-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 0.1614 - Wer: 0.2907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.2881 | 4.26 | 4000 | 0.3764 | 0.6461 | | 1.0767 | 8.53 | 8000 | 0.2657 | 0.5164 | | 0.9841 | 12.79 | 12000 | 0.2330 | 0.4445 | | 0.9274 | 17.06 | 16000 | 0.2134 | 0.3929 | | 0.8781 | 21.32 | 20000 | 0.1945 | 0.3886 | | 0.8381 | 25.59 | 24000 | 0.1840 | 0.3737 | | 0.8054 | 29.85 | 28000 | 0.1756 | 0.3523 | | 0.7763 | 34.12 | 32000 | 0.1745 | 0.3299 | | 0.7474 | 38.38 | 36000 | 0.1677 | 0.3074 | | 0.7298 | 42.64 | 40000 | 0.1649 | 0.2963 | | 0.7125 | 46.91 | 44000 | 0.1617 | 0.2931 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
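A minimal transcription sketch, assuming the `transformers` ASR pipeline (which decodes and resamples the input to the 16 kHz the model expects, via ffmpeg); the file path is a placeholder:

```python
from transformers import pipeline

# Minimal sketch: transcribing a local audio file with this checkpoint.
# The path is an illustrative placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-abkhaz-cv8",
)
print(asr("sample_abkhaz.wav")["text"])
```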
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz
caa4ca38b1f8853a6006453c78694b1cb3a1c9b2
2022-03-23T18:28:58.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ab", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz
3
null
transformers
21,450
--- language: - ab license: apache-2.0 tags: - ab - automatic-speech-recognition - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_7_0 - robust-speech-event datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Abkhaz results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: ab metrics: - name: Test WER type: wer value: 60.07 - name: Test CER type: cer value: 12.5 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-abkhaz This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 0.5359 - Wer: 0.6192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8617 | 22.73 | 500 | 2.6264 | 1.0013 | | 1.2716 | 45.45 | 1000 | 0.6218 | 0.6942 | | 1.049 | 68.18 | 1500 | 0.5442 | 0.6368 | | 0.9632 | 90.91 | 2000 | 0.5364 | 0.6242 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
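The WER figures above can be reproduced for any set of transcripts; a minimal scoring sketch, assuming the Hugging Face `evaluate` library — the strings are illustrative placeholders, not Common Voice data:

```python
import evaluate

# Minimal sketch: computing word error rate for model transcripts.
# Predictions/references below are illustrative placeholders.
wer_metric = evaluate.load("wer")
predictions = ["the transcript the model produced"]
references = ["the transcript a human produced"]
print(wer_metric.compute(predictions=predictions, references=references))
```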
infinitejoy/wav2vec2-large-xls-r-300m-chuvash
20511b1a37b88f2799682bd8c65adeda58d70cb1
2022-03-24T11:55:42.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "cv", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-chuvash
3
null
transformers
21,451
--- language: - cv license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - cv - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Chuvash results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: cv metrics: - name: Test WER type: wer value: 60.31 - name: Test CER type: cer value: 15.08 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-chuvash This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - CV dataset. It achieves the following results on the evaluation set: - Loss: 0.7651 - Wer: 0.6166 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.8032 | 8.77 | 500 | 0.8059 | 0.8352 | | 1.2608 | 17.54 | 1000 | 0.5828 | 0.6769 | | 1.1337 | 26.32 | 1500 | 0.6892 | 0.6908 | | 1.0457 | 35.09 | 2000 | 0.7077 | 0.6781 | | 0.97 | 43.86 | 2500 | 0.5993 | 0.6228 | | 0.8767 | 52.63 | 3000 | 0.7213 | 0.6604 | | 0.8223 | 61.4 | 3500 | 0.8161 | 0.6968 | | 0.7441 | 70.18 | 4000 | 0.7057 | 0.6184 | | 0.7011 | 78.95 | 4500 | 0.7027 | 0.6024 | | 0.6542 | 87.72 | 5000 | 0.7092 | 0.5979 | | 0.6081 | 96.49 | 5500 | 0.7917 | 0.6324 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
infinitejoy/wav2vec2-large-xls-r-300m-galician
f56555a59071f95a907880e2fbc9df3dc6dfc450
2022-03-23T18:34:49.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "gl", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-galician
3
null
transformers
21,452
--- language: - gl license: apache-2.0 tags: - automatic-speech-recognition - generated_from_trainer - gl - hf-asr-leaderboard - model_for_talk - mozilla-foundation/common_voice_7_0 - robust-speech-event datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Galician results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7.0 type: mozilla-foundation/common_voice_7_0 args: gl metrics: - name: Test WER type: wer value: 101.54 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: gl metrics: - name: Test WER type: wer value: 105.69 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: gl metrics: - name: Test WER type: wer value: 101.95 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-galician This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GL dataset. It achieves the following results on the evaluation set: - Loss: 0.1525 - Wer: 0.1542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0067 | 4.35 | 500 | 2.9632 | 1.0 | | 1.4939 | 8.7 | 1000 | 0.5005 | 0.4157 | | 0.9982 | 13.04 | 1500 | 0.1967 | 0.1857 | | 0.8726 | 17.39 | 2000 | 0.1587 | 0.1564 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
infinitejoy/wav2vec2-large-xls-r-300m-hausa
bcbb45b0dcd61a87eaf10e819bb02c38cfbd4d5e
2022-03-24T11:58:04.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ha", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-hausa
3
null
transformers
21,453
--- language: - ha license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - ha - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Hausa results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: ha metrics: - name: Test WER type: wer value: 100 - name: Test CER type: cer value: 132.32 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hausa This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HA dataset. It achieves the following results on the evaluation set: - Loss: 0.5756 - Wer: 0.6014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.7064 | 11.36 | 500 | 2.7112 | 1.0 | | 1.3079 | 22.73 | 1000 | 0.7337 | 0.7776 | | 1.0919 | 34.09 | 1500 | 0.5938 | 0.7023 | | 0.9546 | 45.45 | 2000 | 0.5698 | 0.6133 | | 0.8895 | 56.82 | 2500 | 0.5739 | 0.6142 | | 0.8152 | 68.18 | 3000 | 0.5579 | 0.6091 | | 0.7703 | 79.55 | 3500 | 0.5813 | 0.6210 | | 0.732 | 90.91 | 4000 | 0.5756 | 0.5860 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
infinitejoy/wav2vec2-large-xls-r-300m-hungarian
b31e9157bb1a98ca89d992583c3d94b77925a8b3
2022-03-23T18:34:54.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hu", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-hungarian
3
null
transformers
21,454
--- language: - hu license: apache-2.0 tags: - automatic-speech-recognition - generated_from_trainer - hf-asr-leaderboard - hu - model_for_talk - mozilla-foundation/common_voice_7_0 - robust-speech-event datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Hungarian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: hu metrics: - name: Test WER type: wer value: 31.099 - name: Test CER type: cer value: 6.737 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: hu metrics: - name: Test WER type: wer value: 45.469 - name: Test CER type: cer value: 15.727 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: hu metrics: - name: Test WER type: wer value: 48.2 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hungarian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HU dataset. It achieves the following results on the evaluation set: - Loss: 0.2562 - Wer: 0.3112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.3964 | 3.52 | 1000 | 1.2251 | 0.8781 | | 1.3176 | 7.04 | 2000 | 0.3872 | 0.4462 | | 1.1999 | 10.56 | 3000 | 0.3244 | 0.3922 | | 1.1633 | 14.08 | 4000 | 0.3014 | 0.3704 | | 1.1132 | 17.61 | 5000 | 0.2913 | 0.3623 | | 1.0888 | 21.13 | 6000 | 0.2864 | 0.3498 | | 1.0487 | 24.65 | 7000 | 0.2821 | 0.3435 | | 1.0431 | 28.17 | 8000 | 0.2739 | 0.3308 | | 0.9896 | 31.69 | 9000 | 0.2629 | 0.3243 | | 0.9839 | 35.21 | 10000 | 0.2806 | 0.3308 | | 0.9586 | 38.73 | 11000 | 0.2650 | 0.3235 | | 0.9501 | 42.25 | 12000 | 0.2585 | 0.3173 | | 0.938 | 45.77 | 13000 | 0.2561 | 0.3117 | | 0.921 | 49.3 | 14000 | 0.2559 | 0.3115 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
infinitejoy/wav2vec2-large-xls-r-300m-sakha
8703f5317e998349cef12a9e760420ab12ceb23b
2022-03-24T11:58:14.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sah", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-sakha
3
null
transformers
21,455
--- language: - sah license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - sah - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Sakha results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: sah metrics: - name: Test WER type: wer value: 44.196 - name: Test CER type: cer value: 10.271 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-sakha This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SAH dataset. It achieves the following results on the evaluation set: - Loss: 0.4995 - Wer: 0.4421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.8597 | 8.47 | 500 | 0.7731 | 0.7211 | | 1.2508 | 16.95 | 1000 | 0.5368 | 0.5989 | | 1.1066 | 25.42 | 1500 | 0.5034 | 0.5533 | | 1.0064 | 33.9 | 2000 | 0.4686 | 0.5114 | | 0.9324 | 42.37 | 2500 | 0.4927 | 0.5056 | | 0.876 | 50.85 | 3000 | 0.4734 | 0.4795 | | 0.8082 | 59.32 | 3500 | 0.4748 | 0.4799 | | 0.7604 | 67.8 | 4000 | 0.4949 | 0.4691 | | 0.7241 | 76.27 | 4500 | 0.5090 | 0.4627 | | 0.6739 | 84.75 | 5000 | 0.4967 | 0.4452 | | 0.6447 | 93.22 | 5500 | 0.5071 | 0.4437 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
infinitejoy/wav2vec2-large-xls-r-300m-slovak
dfbe8085d0963f453c5736b0fe136b4834bab5fe
2022-03-24T11:50:01.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sk", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-slovak
3
null
transformers
21,456
--- language: - sk license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - sk - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Slovak results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: sk metrics: - name: Test WER type: wer value: 24.852 - name: Test CER type: cer value: 5.09 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: sk metrics: - name: Test WER type: wer value: 56.388 - name: Test CER type: cer value: 20.654 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: sk metrics: - name: Test WER type: wer value: 59.25 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-slovak This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SK dataset. It achieves the following results on the evaluation set: - Loss: 0.2915 - Wer: 0.2481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 3000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0076 | 19.74 | 3000 | 0.3274 | 0.3806 | | 0.6889 | 39.47 | 6000 | 0.2824 | 0.2942 | | 0.5863 | 59.21 | 9000 | 0.2700 | 0.2735 | | 0.4798 | 78.95 | 12000 | 0.2844 | 0.2602 | | 0.4399 | 98.68 | 15000 | 0.2907 | 0.2489 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
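The WER and CER values in the card's metadata can be reproduced with the metric API of the `datasets` version listed under framework versions; a sketch with toy transcripts standing in for real model output:

```python
from datasets import load_metric

# WER = (substitutions + deletions + insertions) / number of reference words.
wer_metric = load_metric("wer")

# Toy Slovak strings in place of model predictions and gold references.
predictions = ["dobry den", "ako sa mas"]
references = ["dobrý deň", "ako sa máš"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}%")
```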
infinitejoy/wav2vec2-large-xls-r-300m-slovenian
0c5ac121c2b077a310e67507ee10627e8ff3e91b
2022-03-24T11:49:25.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sl", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
infinitejoy
null
infinitejoy/wav2vec2-large-xls-r-300m-slovenian
3
null
transformers
21,457
--- language: - sl license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - sl - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Slovenian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: sl metrics: - name: Test WER type: wer value: 18.97 - name: Test CER type: cer value: 4.534 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: sl metrics: - name: Test WER type: wer value: 55.048 - name: Test CER type: cer value: 22.739 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: sl metrics: - name: Test WER type: wer value: 54.81 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-slovenian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SL dataset. It achieves the following results on the evaluation set: - Loss: 0.2093 - Wer: 0.1907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.785 | 12.5 | 1000 | 0.7465 | 0.6812 | | 0.8989 | 25.0 | 2000 | 0.2495 | 0.2732 | | 0.7118 | 37.5 | 3000 | 0.2126 | 0.2284 | | 0.6367 | 50.0 | 4000 | 0.2049 | 0.2049 | | 0.5763 | 62.5 | 5000 | 0.2116 | 0.2055 | | 0.5196 | 75.0 | 6000 | 0.2111 | 0.1910 | | 0.4949 | 87.5 | 7000 | 0.2131 | 0.1931 | | 0.4797 | 100.0 | 8000 | 0.2093 | 0.1907 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
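All of these XLS-R fine-tunes expect 16 kHz input, while Common Voice distributes 48 kHz audio; a resampling sketch with torchaudio (the clip path is a placeholder) that prepares a waveform for the feature extractor:

```python
import torchaudio

# Load a 48 kHz Common Voice clip and downsample it to the 16 kHz rate
# the wav2vec2 feature extractor was configured with.
speech_array, sampling_rate = torchaudio.load("common_voice_sl_clip.mp3")
resampler = torchaudio.transforms.Resample(orig_freq=sampling_rate, new_freq=16_000)
speech_16k = resampler(speech_array).squeeze().numpy()
```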
institutogloria/hate-pt-tweet-binary
aa1f23e8296e86e8fc87376dabbf914d1ed87d62
2022-01-03T18:30:39.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
institutogloria
null
institutogloria/hate-pt-tweet-binary
3
null
transformers
21,458
Entry not found
isakbos/Q8BERT_COLA_L_512
fab28d99a74a124cfca129e43048cc632e22d67f
2022-02-23T06:32:53.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
isakbos
null
isakbos/Q8BERT_COLA_L_512
3
null
transformers
21,459
Entry not found
it5/it5-large-ilgiornale-to-repubblica
dede1cdc7aa4d681738a56fa4135ed6255581795
2022-03-09T08:04:16.000Z
[ "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "transformers", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "style-transfer", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
it5
null
it5/it5-large-ilgiornale-to-repubblica
3
null
transformers
21,460
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - style-transfer widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." 
metrics: - rouge - bertscore - headline-headline-consistency-classifier - headline-article-consistency-classifier model-index: - name: it5-large-ilgiornale-to-repubblica results: - task: type: headline-style-transfer-ilgiornale-to-repubblica name: "Headline style transfer (Il Giornale to Repubblica)" dataset: type: gsarti/change_it name: "CHANGE-IT" metrics: - type: rouge1 value: 0.270 name: "Test Rouge1" - type: rouge2 value: 0.089 name: "Test Rouge2" - type: rougeL value: 0.237 name: "Test RougeL" - type: bertscore value: 0.400 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" - type: headline-headline-consistency-classifier value: 0.906 name: "Test Headline-Headline Consistency Accuracy" - type: headline-article-consistency-classifier value: 0.852 name: "Test Headline-Article Consistency Accuracy" co2_eq_emissions: emissions: "51g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Large for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹 This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline g2r = pipeline("text2text-generation", model='it5/it5-large-ilgiornale-to-repubblica') g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro.
\"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".") >>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-ilgiornale-to-repubblica") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-ilgiornale-to-repubblica") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
itsunoda/wolfbbsRoBERTa-large
899adeb42de47b2ed3cfc571073ff5303aed1fc0
2021-01-11T12:34:29.000Z
[ "pytorch", "tf", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
itsunoda
null
itsunoda/wolfbbsRoBERTa-large
3
null
transformers
21,461
Entry not found
iwontbecreative/rembert
12d126dba046355e5a83a78915cf1a92c9559e0a
2021-07-01T03:14:33.000Z
[ "pytorch", "tf", "rembert", "transformers" ]
null
false
iwontbecreative
null
iwontbecreative/rembert
3
null
transformers
21,462
Entry not found
jacksee/gpt2-finetuned-biochemistry
371b085f630ca8bca2e653c22305dee9f1130b25
2021-10-29T06:02:09.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
jacksee
null
jacksee/gpt2-finetuned-biochemistry
3
null
transformers
21,463
jacobduncan00/hackMIT-finetuned-sst2
687010469a87a7de8df786560a9a95c80b6f94f3
2021-08-24T04:05:25.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer" ]
text-classification
false
jacobduncan00
null
jacobduncan00/hackMIT-finetuned-sst2
3
null
transformers
21,464
--- tags: - generated_from_trainer datasets: - glue metrics: - accuracy model_index: - name: hackMIT-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metric: name: Accuracy type: accuracy value: 0.7970183486238532 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hackMIT-finetuned-sst2 This model is a fine-tuned version of [Blaine-Mason/hackMIT-finetuned-sst2](https://huggingface.co/Blaine-Mason/hackMIT-finetuned-sst2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0046 - Accuracy: 0.7970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.7339491016138283e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 23 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0652 | 1.0 | 1053 | 0.9837 | 0.7970 | | 0.0586 | 2.0 | 2106 | 0.9927 | 0.7959 | | 0.0549 | 3.0 | 3159 | 1.0046 | 0.7970 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
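The card reports accuracy on GLUE SST-2 but no inference code; a minimal sentiment sketch — note the label names (e.g. `LABEL_0`/`LABEL_1` vs. `negative`/`positive`) depend on this checkpoint's config and are not documented above:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jacobduncan00/hackMIT-finetuned-sst2")

# SST-2 is binary sentiment; each result carries a label and a confidence score.
print(classifier("A thoroughly enjoyable and well-acted film."))
print(classifier("The plot was a complete mess."))
```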
jaketae/fastspeech2-ljspeech
439ffc2606bdc1e3716f4e116d6c17f739ae59b3
2022-04-16T07:27:27.000Z
[ "pytorch", "fastspeech2", "transformers" ]
null
false
jaketae
null
jaketae/fastspeech2-ljspeech
3
null
transformers
21,465
Entry not found
jamescalam/bert-stsb-aug
b432ebb00d7cc1ff40ba94757cebb385871bb7c9
2021-12-17T08:52:21.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
jamescalam
null
jamescalam/bert-stsb-aug
3
null
sentence-transformers
21,466
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Augmented SBERT STSb This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is used as a demo model within the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp), for the chapter on [In-domain Data Augmentation with BERT](https://www.pinecone.io/learn/data-augmentation/). ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('jamescalam/bert-stsb-aug') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('jamescalam/bert-stsb-aug') model = AutoModel.from_pretrained('jamescalam/bert-stsb-aug') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2059 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 308, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
jamesmullenbach/CLIP_TTP_BERT_Context_250k
e5da0353d8a8c6e2f974f05de1e710b5d39241e9
2021-08-03T19:03:12.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
jamesmullenbach
null
jamesmullenbach/CLIP_TTP_BERT_Context_250k
3
1
transformers
21,467
Entry not found
jannesg/takalane_nbl_roberta
d824f5bdc807eb723d057bfae0a4b8a9175bf2e6
2021-09-22T08:52:01.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "nr", "transformers", "masked-lm", "license:mit", "autotrain_compatible" ]
fill-mask
false
jannesg
null
jannesg/takalane_nbl_roberta
3
null
transformers
21,468
--- language: - nr thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg tags: - nr - fill-mask - pytorch - roberta - masked-lm license: mit --- # Takalani Sesame - Ndebele 🇿🇦 <img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> ## Model description Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nbl_roberta") model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_nbl_roberta") ``` #### Limitations and bias Updates will be added continuously to improve performance. This is a very low-resource language, so results may be poor at first. ## Training data Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/> **Sentences:** 318M ## Training procedure No preprocessing. Standard Huggingface hyperparameters. ## Author Jannes Germishuys [website](http://jannesgg.github.io)
jannesg/takalane_nso_roberta
25e73351d6d5a6b36bcb9fcbb1e1529b7e53c6a4
2021-09-22T08:52:04.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "nso", "transformers", "masked-lm", "license:mit", "autotrain_compatible" ]
fill-mask
false
jannesg
null
jannesg/takalane_nso_roberta
3
null
transformers
21,469
--- language: - nso thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg tags: - nso - fill-mask - pytorch - roberta - masked-lm license: mit --- # Takalani Sesame - Northern Sotho 🇿🇦 <img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> ## Model description Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nso_roberta") model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_nso_roberta") ``` #### Limitations and bias Updates will be added continuously to improve performance. ## Training data Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/> **Sentences:** 4746 ## Training procedure No preprocessing. Standard Huggingface hyperparameters. ## Author Jannes Germishuys [website](http://jannesgg.github.io)
jannesg/takalane_zul_roberta
bb451723252ef6b73cb6e36be5a5e66786a0036b
2021-09-22T08:52:21.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "zul", "transformers", "masked-lm", "license:mit", "autotrain_compatible" ]
fill-mask
false
jannesg
null
jannesg/takalane_zul_roberta
3
null
transformers
21,470
--- language: - zul thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg tags: - zul - fill-mask - pytorch - roberta - masked-lm license: mit --- # Takalani Sesame - Zulu 🇿🇦 <img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> ## Model description Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_zul_roberta") model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_zul_roberta") ``` #### Limitations and bias Updates will be added continuously to improve performance. ## Training data Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/> **Sentences:** 410000 ## Training procedure No preprocessing. Standard Huggingface hyperparameters. ## Author Jannes Germishuys [website](http://jannesgg.github.io)
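The snippet above only loads the model and tokenizer; a mask-filling sketch that goes one step further — the isiZulu sentence is purely illustrative, and `tokenizer.mask_token` avoids hard-coding RoBERTa's `<mask>` string:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jannesg/takalane_zul_roberta")

# Illustrative isiZulu prompt, roughly "I like ___."; any sentence with
# exactly one mask token works here.
sentence = f"Ngiyathanda {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], round(prediction["score"], 3))
```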
jatinshah/distilbert-base-uncased-finetuned-imdb
9d08d6477f2a6bd495140849d7c038d508ac1931
2022-02-14T04:17:56.000Z
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
jatinshah
null
jatinshah/distilbert-base-uncased-finetuned-imdb
3
null
transformers
21,471
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7091 | 1.0 | 157 | 2.4999 | | 2.5768 | 2.0 | 314 | 2.4239 | | 2.5371 | 3.0 | 471 | 2.4366 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0a0+0aef44c - Datasets 1.18.3 - Tokenizers 0.11.0
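Since this is a masked-language model domain-adapted to IMDB (not a sentiment classifier), the natural smoke test is mask filling on review-like text; a sketch using DistilBERT's `[MASK]` token:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="jatinshah/distilbert-base-uncased-finetuned-imdb")

# After domain adaptation on movie reviews, film vocabulary should rank highly.
for prediction in unmasker("This movie is a great [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```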
jbarry/irish-gpt2
8e3ef58064f8390f14ad8610a89400dc1e20eb4b
2021-10-20T16:40:12.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
jbarry
null
jbarry/irish-gpt2
3
null
transformers
21,472
This model was trained on the Irish (`ga`) portion of the OSCAR dataset for experimental purposes. The files used to train the tokenizer and the model are included in this repository.
jcblaise/bert-tagalog-base-cased-WWM
438d4b07a5bcd039d293ee02fda85655346ef265
2021-11-12T03:21:18.000Z
[ "pytorch", "jax", "bert", "fill-mask", "tl", "transformers", "tagalog", "filipino", "license:gpl-3.0", "autotrain_compatible" ]
fill-mask
false
jcblaise
null
jcblaise/bert-tagalog-base-cased-WWM
3
null
transformers
21,473
--- language: tl tags: - bert - tagalog - filipino license: gpl-3.0 inference: false --- **Deprecation Notice** This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available. Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance. --- # BERT Tagalog Base Cased (Whole Word Masking) Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking. ## Citations All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work: ``` @article{cruz2020establishing, title={Establishing Baselines for Text Classification in Low-Resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:2005.02068}, year={2020} } @article{cruz2019evaluating, title={Evaluating Language Model Finetuning Techniques for Low-resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:1907.00409}, year={2019} } ``` ## Data and Other Resources Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com ## Contact If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
jegormeister/bert-base-dutch-cased
a134403b141aaea1a945e742f8ca331416f3bc32
2021-08-05T19:28:55.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
jegormeister
null
jegormeister/bert-base-dutch-cased
3
2
sentence-transformers
21,474
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # bert-base-dutch-cased-snli This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('jegormeister/bert-base-dutch-cased') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('jegormeister/bert-base-dutch-cased') model = AutoModel.from_pretrained('jegormeister/bert-base-dutch-cased') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bert-base-dutch-cased-snli) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 339 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 1, "evaluation_steps": 0, "evaluator": "utils.CombEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jery33/distilbert-base-uncased-finetuned-cola
c3d097a7fa07b1f86ab79feb551ddec1ab1d164c
2021-12-13T12:09:54.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
jery33
null
jery33/distilbert-base-uncased-finetuned-cola
3
null
transformers
21,475
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5373281885173845 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7637 - Matthews Correlation: 0.5373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5306 | 1.0 | 535 | 0.5156 | 0.4063 | | 0.3524 | 2.0 | 1070 | 0.5249 | 0.5207 | | 0.2417 | 3.0 | 1605 | 0.6514 | 0.5029 | | 0.1762 | 4.0 | 2140 | 0.7637 | 0.5373 | | 0.1252 | 5.0 | 2675 | 0.8746 | 0.5291 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
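CoLA is a grammatical-acceptability task, so a usage sketch should surface class probabilities rather than a bare label; the index order below follows the standard GLUE convention (0 = unacceptable, 1 = acceptable), which this card does not state explicitly:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "jery33/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A deliberately ungrammatical sentence to probe the classifier.
inputs = tokenizer("The boys was walking to school.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs.tolist())  # [[p(unacceptable), p(acceptable)]] under the usual mapping
```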
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs
4f291cd0cb74d6f391d6a649909624cccf603d35
2022-02-12T20:28:53.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
jfarray
null
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs
3
null
sentence-transformers
21,476
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs') model = AutoModel.from_pretrained('jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jgammack/SAE-distilbert-base-uncased
c587ef52876943df7c05ca219a1d8c2268a77eef
2022-02-09T15:32:40.000Z
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
jgammack
null
jgammack/SAE-distilbert-base-uncased
3
null
transformers
21,477
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: SAE-distilbert-base-uncased results: [] widget: - text: "Wind noise was detected coming from the car [MASK] closure system." example_title: "Closure system" --- # SAE-distilbert-base-uncased This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [jgammack/SAE-door-abstracts](https://huggingface.co/datasets/jgammack/SAE-door-abstracts) dataset. It achieves the following results on the evaluation set: - Loss: 2.2970 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 15 - eval_batch_size: 15 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5323 | 1.0 | 37 | 2.4503 | | 2.4968 | 2.0 | 74 | 2.4571 | | 2.4688 | 3.0 | 111 | 2.4099 | | 2.419 | 4.0 | 148 | 2.3343 | | 2.4229 | 5.0 | 185 | 2.3072 | | 2.4067 | 6.0 | 222 | 2.2927 | | 2.3877 | 7.0 | 259 | 2.2836 | | 2.374 | 8.0 | 296 | 2.3767 | | 2.3582 | 9.0 | 333 | 2.2493 | | 2.356 | 10.0 | 370 | 2.2847 | | 2.3294 | 11.0 | 407 | 2.3234 | | 2.3358 | 12.0 | 444 | 2.2660 | | 2.3414 | 13.0 | 481 | 2.2887 | | 2.3154 | 14.0 | 518 | 2.3737 | | 2.311 | 15.0 | 555 | 2.2686 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
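The widget sentence in the metadata can double as a smoke test; a sketch that scores the top candidates for the masked word directly with the model and tokenizer:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "jgammack/SAE-distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Wind noise was detected coming from the car [MASK] closure system."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the mask position and list the five highest-scoring vocabulary items.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```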
ji-xin/bert_base-RTE-two_stage
b18280aa68741563e0008f04d2d0a809130a8fb7
2020-07-08T14:54:15.000Z
[ "pytorch", "transformers" ]
null
false
ji-xin
null
ji-xin/bert_base-RTE-two_stage
3
null
transformers
21,478
Entry not found
ji-xin/roberta_large-MRPC-two_stage
77db3a295762646b99a219c48a70254d153014fd
2020-07-08T15:03:50.000Z
[ "pytorch", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ji-xin
null
ji-xin/roberta_large-MRPC-two_stage
3
null
transformers
21,479
Entry not found
jimregan/BERTreach
76a4f2431e9d338594c678401a2c564af2110e4f
2021-12-01T20:51:13.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "ga", "transformers", "irish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
jimregan
null
jimregan/BERTreach
3
null
transformers
21,480
--- license: apache-2.0 language: ga tags: - irish --- ## BERTreach ([beirtreach](https://www.teanglann.ie/en/fgb/beirtreach) means 'oyster bed') **Model size:** 84M **Training data:** * [PARSEME 1.2](https://gitlab.com/parseme/parseme_corpus_ga/-/blob/master/README.md) * Newscrawl 300k portion of the [Leipzig Corpora](https://wortschatz.uni-leipzig.de/en/download/irish) * Private news corpus crawled with [Corpus Crawler](https://github.com/google/corpuscrawler) (2125804 sentences, 47419062 tokens, as reckoned by wc) ``` from transformers import pipeline fill_mask = pipeline("fill-mask", model="jimregan/BERTreach", tokenizer="jimregan/BERTreach") ```
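The snippet above builds the pipeline but never calls it; a self-contained usage sketch — the Irish prompt ("The weather is ___ today.") is only an illustration, and `tokenizer.mask_token` avoids hard-coding the mask string:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jimregan/BERTreach", tokenizer="jimregan/BERTreach")

# "Tá an aimsir ___ inniu." — "The weather is ___ today." (illustrative prompt)
masked = f"Tá an aimsir {fill_mask.tokenizer.mask_token} inniu."
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```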
jimregan/wav2vec2-large-xls-r-300m-irish-colab
8308a74cfd7ab2de64d6b5a6f6897b8e3b678adf
2021-11-30T17:53:09.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jimregan
null
jimregan/wav2vec2-large-xls-r-300m-irish-colab
3
null
transformers
21,481
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-irish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-irish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.4286 - Wer: 0.5097 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 210 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 4.3406 | 24.97 | 400 | 1.1677 | 0.7270 | | 0.2527 | 49.97 | 800 | 1.2686 | 0.5927 | | 0.0797 | 74.97 | 1200 | 1.3970 | 0.5769 | | 0.0424 | 99.97 | 1600 | 1.4093 | 0.5600 | | 0.0286 | 124.97 | 2000 | 1.3684 | 0.5407 | | 0.0174 | 149.97 | 2400 | 1.4571 | 0.5205 | | 0.0109 | 174.97 | 2800 | 1.4327 | 0.5178 | | 0.0072 | 199.97 | 3200 | 1.4286 | 0.5097 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.13.3 - Tokenizers 0.10.3
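For inference on recordings longer than a few tens of seconds, recent `transformers` releases let the ASR pipeline chunk audio with overlapping strides; a sketch with a hypothetical file path (the chunking arguments assume a pipeline version that supports them):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jimregan/wav2vec2-large-xls-r-300m-irish-colab",
    chunk_length_s=30,       # split long audio into 30-second windows
    stride_length_s=(4, 2),  # overlap windows so edge words are not cut off
)

# "interview_ga.wav" is a placeholder for any long Irish-language recording.
print(asr("interview_ga.wav")["text"])
```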
jimregan/wav2vec2-large-xlsr-irish-basic
89822e125a46bffd96f8024eff764360d3ceae8a
2021-09-22T08:52:55.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "ga", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jimregan
null
jimregan/wav2vec2-large-xlsr-irish-basic
3
null
transformers
21,482
--- language: ga datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Irish by Jim O'Regan results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ga-IE type: common_voice args: ga-IE metrics: - name: Test WER type: wer value: 47.4 --- # Wav2Vec2-Large-XLSR-Irish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Irish Common Voice dataset](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Irish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ga-IE", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") model.to("cuda") # So, tolower() for Irish is a bit complicated: tAthar -> t-athair # toupper() is non-deterministic :) def is_upper_vowel(letter): if letter in ['A', 'E', 'I', 'O', 'U', 'Á', 'É', 'Í', 'Ó', 'Ú']: return True else: return False def irish_lower(word): if len(word) > 1 and word[0] in ['n', 't'] and is_upper_vowel(word[1]): return word[0] + '-' + word[1:].lower() else: return word.lower() def irish_lower_sentence(sentence): return " ".join([irish_lower(w) for w in sentence.split(" ")]) chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*]' def remove_special_characters(sentence): tmp = re.sub('’ ', ' ', sentence) tmp = re.sub("’$", '', tmp) tmp = re.sub('’', '\'', tmp) tmp = re.sub(chars_to_ignore_regex, '', tmp) sentence = irish_lower_sentence(tmp) + ' ' return sentence resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. 
# We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = remove_special_characters(batch["sentence"]) speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 43.7 % ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/irish/fine-tune-xlsr-wav2vec2-on-irish-asr-with-transformers.ipynb)
jinlmsft/t5-large-multiwoz
e1241494948b72b1de8ea4dd019be18f27888dbc
2022-02-04T23:08:18.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
jinlmsft
null
jinlmsft/t5-large-multiwoz
3
null
transformers
21,483
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large-multiwoz
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-large-multiwoz

This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0064
- Acc: 1.0
- True Num: 56671
- Num: 56776

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Acc  | True Num | Num   |
|:-------------:|:-----:|:----:|:---------------:|:----:|:--------:|:-----:|
| 0.1261        | 1.13  | 1000 | 0.0933          | 0.98 | 55574    | 56776 |
| 0.0951        | 2.25  | 2000 | 0.0655          | 0.98 | 55867    | 56776 |
| 0.0774        | 3.38  | 3000 | 0.0480          | 0.99 | 56047    | 56776 |
| 0.0584        | 4.51  | 4000 | 0.0334          | 0.99 | 56252    | 56776 |
| 0.042         | 5.64  | 5000 | 0.0222          | 0.99 | 56411    | 56776 |
| 0.0329        | 6.76  | 6000 | 0.0139          | 1.0  | 56502    | 56776 |
| 0.0254        | 7.89  | 7000 | 0.0094          | 1.0  | 56626    | 56776 |
| 0.0214        | 9.02  | 8000 | 0.0070          | 1.0  | 56659    | 56776 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
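A minimal inference sketch, assuming the standard `transformers` seq2seq API; the exact dialogue prompt format used during fine-tuning is not documented here, so the input below is purely illustrative:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jinlmsft/t5-large-multiwoz")
model = AutoModelForSeq2SeqLM.from_pretrained("jinlmsft/t5-large-multiwoz")

# Illustrative input only: the true MultiWOZ prompt format is undocumented.
inputs = tokenizer("user: I am looking for a cheap restaurant in the centre", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```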
jnz/electra-ka-anti-gov
40b95a4132c49d2dbb631f8e28b5b5ca194f5c3a
2021-03-30T14:03:28.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
jnz
null
jnz/electra-ka-anti-gov
3
null
transformers
21,484
Entry not found
jnz/electra-ka-discrediting
00af37e91a65376c43a4c3b0d58bbe9ed0c5ddb6
2021-03-30T14:01:41.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
jnz
null
jnz/electra-ka-discrediting
3
null
transformers
21,485
Entry not found
jnz/electra-ka-fake-news-tagging
be9f716f2ed1d27d34ff27c82e827efbf9043126
2020-11-15T20:41:39.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
jnz
null
jnz/electra-ka-fake-news-tagging
3
null
transformers
21,486
Entry not found
jogonba2/POCTS
02db7ec3ae4f36068d5f17ebd5d27507624c4aae
2021-09-21T09:35:25.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "summarization", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
jogonba2
null
jogonba2/POCTS
3
null
transformers
21,487
---
license: apache-2.0
tags:
- summarization
metrics:
- rouge
model-index:
- name: POCTS
  results:
  - task:
      name: Summarization
      type: summarization
    metrics:
    - name: Rouge1
      type: rouge
      value: 26.1391
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# POCTS

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0970
- Rouge1: 26.1391
- Rouge2: 7.3101
- Rougel: 19.1217
- Rougelsum: 21.9706
- Gen Len: 46.2245

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.3259        | 1.0   | 33875  | 3.2535          | 17.942  | 4.5143 | 14.2766 | 15.582    | 19.3901 |
| 2.9764        | 2.0   | 67750  | 3.1278          | 18.6558 | 5.1844 | 15.0939 | 16.3367   | 19.9174 |
| 2.5889        | 3.0   | 101625 | 3.0970          | 19.1763 | 5.4517 | 15.5342 | 16.7186   | 19.8855 |

### Framework versions

- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
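A minimal usage sketch, assuming the standard `transformers` summarization pipeline (the input article is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jogonba2/POCTS")

# Illustrative input document; any long text works here.
article = (
    "The local council met on Tuesday to discuss the new transport plan. "
    "Members debated funding, timelines, and the impact on nearby residents."
)
print(summarizer(article, max_length=48, min_length=8)[0]["summary_text"])
```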
joheras/anglosaxon
394f4adae712b238c397390df23e421895f2abd6
2021-06-23T05:50:15.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
joheras
null
joheras/anglosaxon
3
null
transformers
21,488
Entry not found
johnpaulbin/cvai-bert-asag
774a74a695af791a33cc7d4780a981f6c5d8baf2
2021-12-13T23:17:55.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
johnpaulbin
null
johnpaulbin/cvai-bert-asag
3
null
transformers
21,489
Entry not found
johnpaulbin/cvai-deberta3-asag
ad17d5a741c2631d8bd43423dcba41b377fe505a
2021-12-14T02:16:56.000Z
[ "pytorch", "deberta-v2", "text-classification", "transformers" ]
text-classification
false
johnpaulbin
null
johnpaulbin/cvai-deberta3-asag
3
null
transformers
21,490
Entry not found
johnpaulbin/gpt2-skript-80-v3
250ece8dc5d0fdea0ec569021059bf747720719b
2021-07-18T04:53:22.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
johnpaulbin
null
johnpaulbin/gpt2-skript-80-v3
3
null
transformers
21,491
GPT-2 trained on 80k lines of Skript (v3).

Training loss: `0.594200`

1.5 GB

Inferencing colab: https://colab.research.google.com/drive/1uTAPLa1tuNXFpG0qVLSseMro6iU9-xNc
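A minimal generation sketch, assuming the standard `transformers` text-generation pipeline (the prompt is an illustrative Skript event header):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="johnpaulbin/gpt2-skript-80-v3")

# Illustrative Skript-style prompt
print(generator("on join:", max_length=64, num_return_sequences=1)[0]["generated_text"])
```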
jonatasgrosman/bartuque-bart-base-pretrained-rm-2
58510ab69835b386923f846c54c8d4bc86852460
2021-02-25T23:07:45.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
jonatasgrosman
null
jonatasgrosman/bartuque-bart-base-pretrained-rm-2
3
null
transformers
21,492
Just a test
joaoalvarenga/wav2vec2-large-100k-voxpopuli-pt
f88a308a9d5897ad4ded4879fb2b386c8ecf8884
2021-07-06T09:11:37.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:common_voice", "transformers", "audio", "speech", "apache-2.0", "portuguese-speech-corpus", "PyTorch", "voxpopuli", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
joaoalvarenga
null
joaoalvarenga/wav2vec2-large-100k-voxpopuli-pt
3
null
transformers
21,493
---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- voxpopuli
license: apache-2.0
model-index:
- name: JoaoAlvarenga Wav2Vec2 Large 100k VoxPopuli Portuguese
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice pt
      type: common_voice
      args: pt
    metrics:
    - name: Test WER
      type: wer
      value: 19.735723%
---

# Wav2Vec2-Large-100k-VoxPopuli-Portuguese

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Portuguese test data of Common Voice.

You need to install Enelvo, an open-source spell corrector trained on Twitter user posts:

`pip install enelvo`

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from enelvo import normaliser
import re

test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
norm = normaliser.Normaliser()

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Batched inference over the preprocessed test set,
# with Enelvo spell correction applied to the predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = [norm.normalise(i) for i in processor.batch_decode(pred_ids)]
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result (WER)**: 19.735723%

## Training

The Common Voice `train` and `validation` datasets were used for training.
jsfoon/slogan-gptneo
d0fd128b1b0ca522dabe705a433a0ec277e1337a
2021-08-04T12:23:47.000Z
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
false
jsfoon
null
jsfoon/slogan-gptneo
3
null
transformers
21,494
Entry not found
jsnfly/wav2vec2-xls-r-1b-de-cv8
41c16f10bb2dd6bb64c4c22f8a571c5415e11c14
2022-03-23T18:26:40.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jsnfly
null
jsnfly/wav2vec2-xls-r-1b-de-cv8
3
null
transformers
21,495
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-1B - German
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: de
    metrics:
    - name: Test WER
      type: wer
      value: 11.37
    - name: Test CER
      type: cer
      value: 2.89
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: de
    metrics:
    - name: Dev WER
      type: wer
      value: 31.16
    - name: Dev CER
      type: cer
      value: 13.41
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: de
    metrics:
    - name: Test WER
      type: wer
      value: 36.79
---

# XLS-R-1b-DE

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset. (See `run.sh` for training parameters.)
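A minimal transcription sketch, assuming the standard `transformers` ASR pipeline and a local 16 kHz German recording (the file path is illustrative):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jsnfly/wav2vec2-xls-r-1b-de-cv8")

# "sample_de.wav" is an illustrative path to a 16 kHz German audio file.
print(asr("sample_de.wav")["text"])
```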
julien-c/timm-dpn92
5411e038a5952ec38968e55c6c9b7762a45d0110
2021-02-18T11:18:56.000Z
[ "pytorch", "dataset:imagenet", "arxiv:1707.01629", "arxiv:1906.02659", "arxiv:2010.15052", "timm", "image-classification", "dpn", "license:apache-2.0" ]
image-classification
false
julien-c
null
julien-c/timm-dpn92
3
null
timm
21,496
---
tags:
- image-classification
- timm
- dpn
license: apache-2.0
datasets:
- imagenet
---

# `dpn92` from `rwightman/pytorch-image-models`

From [`rwightman/pytorch-image-models`](https://github.com/rwightman/pytorch-image-models):

```
""" PyTorch implementation of DualPathNetworks
Based on original MXNet implementation https://github.com/cypw/DPNs with
many ideas from another PyTorch implementation https://github.com/oyam/pytorch-DPNs.

This implementation is compatible with the pretrained weights from cypw's MXNet implementation.

Hacked together by / Copyright 2020 Ross Wightman
"""
```

## Model description

[Dual Path Networks](https://arxiv.org/abs/1707.01629)

## Intended uses & limitations

You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head to fine-tune it on a downstream task (another classification task with different labels, image segmentation or object detection, to name a few).

### How to use

You can use this model with the usual factory method in `timm`:

```python
import PIL
import timm
import torch

model = timm.create_model("julien-c/timm-dpn92")

img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")

config = model.default_cfg

if isinstance(config["input_size"], tuple):
    img_size = config["input_size"][-2:]
else:
    img_size = config["input_size"]

transform = timm.data.transforms_factory.transforms_imagenet_eval(
    img_size=img_size,
    interpolation=config["interpolation"],
    mean=config["mean"],
    std=config["std"],
)

input_tensor = transform(img)
input_tensor = input_tensor.unsqueeze(0)
# ^ batch size = 1

with torch.no_grad():
    output = model(input_tensor)

probs = output.squeeze(0).softmax(dim=0)
```

### Limitations and bias

The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will probably not generalize well on drawings or images containing multiple objects with different labels.

The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or models created by fine-tuning this model will work better on images picturing scenes from these countries (see [this paper](https://arxiv.org/abs/1906.02659) for examples).

More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in the training images.

## Training data

This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million hand-annotated images with 1,000 categories.

## Training procedure

To be completed

### Preprocessing

To be completed

## Evaluation results

To be completed

### BibTeX entry and citation info

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```

and

```bibtex
@misc{chen2017dual,
  title={Dual Path Networks},
  author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng},
  year={2017},
  eprint={1707.01629},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
juliensimon/autonlp-imdb-demo-hf-16622767
3b69f57eb6f3ff4fbd2b3619e1c51b225b6146fc
2021-10-11T12:38:37.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:juliensimon/autonlp-data-imdb-demo-hf", "transformers", "autonlp" ]
text-classification
false
juliensimon
null
juliensimon/autonlp-imdb-demo-hf-16622767
3
null
transformers
21,497
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- juliensimon/autonlp-data-imdb-demo-hf
---

# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 16622767

## Validation Metrics

- Loss: 0.20029613375663757
- Accuracy: 0.9256
- Precision: 0.9090909090909091
- Recall: 0.9466984884645983
- AUC: 0.979257749523025
- F1: 0.9275136399064692

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622767
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
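To turn the raw `outputs` from the snippet above into a predicted label, a softmax/argmax step can be appended (a sketch; the label names come from the model config):

```
import torch

# Convert logits to probabilities and pick the most likely class.
probs = torch.softmax(outputs.logits, dim=-1)
pred = probs.argmax(dim=-1).item()
print(model.config.id2label[pred], probs[0][pred].item())
```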
junnyu/electra_small_generator
b3e6c6288125df8de0900e6584ab194a8f6f476a
2021-09-22T08:54:18.000Z
[ "pytorch", "electra", "fill-mask", "en", "dataset:openwebtext", "transformers", "masked-lm", "license:mit", "autotrain_compatible" ]
fill-mask
false
junnyu
null
junnyu/electra_small_generator
3
null
transformers
21,498
---
language: en
thumbnail: https://github.com/junnyu
tags:
- pytorch
- electra
- masked-lm
license: mit
datasets:
- openwebtext
---

# 1. An ELECTRA-small model trained on the openwebtext dataset

# 2. Reproduced results (dev dataset)

|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|ELECTRA-Small-OWT (original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-Small-OWT (this)**|55.82|89.67|87.0|86.96|89.28|80.08|87.50|66.07|80.30|

# 3. Training details

- Dataset: openwebtext
- Training batch_size: 256
- Learning rate (lr): 5e-4
- Maximum sentence length (max_seqlen): 128
- Total training steps: 625k
- GPU: RTX 3090
- Total training time: about 2.5 days

# 4. Usage

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="junnyu/electra_small_generator",
    tokenizer="junnyu/electra_small_generator"
)

print(
    fill_mask("HuggingFace is creating a [MASK] that the community uses to solve NLP tasks.")
)
```
junnyu/uer_large
78a0e00c4f9506fc72d05f322435b72823dfdcf0
2021-07-21T08:42:35.000Z
[ "pytorch", "bert", "fill-mask", "zh", "transformers", "autotrain_compatible" ]
fill-mask
false
junnyu
null
junnyu/uer_large
3
2
transformers
21,499
---
language: zh
tags:
- bert
- pytorch
widget:
- text: "巴黎是[MASK]国的首都。"
---

This is the MixedCorpus+BertEncoder(large)+MlmTarget model from https://github.com/dbiir/UER-py/wiki/Modelzoo

https://share.weiyun.com/5G90sMJ

Pre-trained on a mixed large Chinese corpus. The configuration file is bert_large_config.json.

## Citation

```tex
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```
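A minimal fill-mask sketch, assuming the standard `transformers` pipeline API; the example sentence is the widget text above ("Paris is the capital of [MASK] country."):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="junnyu/uer_large")

# Widget example from the card: "Paris is the capital of [MASK] country."
print(fill_mask("巴黎是[MASK]国的首都。"))
```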